
2012 6th IEEE International Conference on Intelligent Systems (IS)

Date 6-8 Sept. 2012


Displaying Results 1 - 25 of 154
  • Foreword

    Page(s): 1
    PDF (46 KB)
    Freely Available from IEEE
  • Theories, design and applications of contemporary intelligent systems - message from the editors

    Page(s): 1 - 2
    PDF (90 KB)
    Freely Available from IEEE
  • New trends in the development of foundational and practical bases for intelligent applications - message from the editors

    Page(s): 1 - 2
    PDF (78 KB)
    Freely Available from IEEE
  • Prioritized aggregation operators and their applications

    Page(s): 2 - 5
    PDF (170 KB) | HTML

    Multicriteria decision problems with a prioritization relationship between the criteria arise in situations in which a decrease in satisfaction of a higher-priority criterion cannot be readily compensated by an increase in satisfaction of a lower-priority criterion. A typical example is the relationship between safety and cost. We consider criteria aggregation procedures for use in this kind of situation. We suggest that the prioritization between criteria can be modeled by making the weight associated with a criterion dependent upon the satisfaction of the higher-priority criteria. We implement this using a prioritized scoring operator. We show how the lack of satisfaction of a higher-priority criterion blocks the possibility of compensation by lower-priority criteria. We show that in the special case where the prioritization relationship among the criteria is a linear ordering, we can use a prioritized averaging operator.
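The blocking behavior the abstract describes can be sketched in a few lines, in the spirit of Yager's prioritized operators (a minimal illustration, not the paper's exact formulation): each criterion's weight is the running product of the satisfaction degrees of all higher-priority criteria, so a badly violated high-priority criterion shrinks every weight below it.

```python
def prioritized_weights(sats):
    """u_1 = 1; u_i = u_{i-1} * a_{i-1}: low satisfaction of a
    higher-priority criterion shrinks all lower-priority weights."""
    w, weights = 1.0, []
    for a in sats:
        weights.append(w)
        w *= a
    return weights

def prioritized_score(sats):
    """Prioritized scoring: weighted sum, no normalization."""
    return sum(u * a for u, a in zip(prioritized_weights(sats), sats))

def prioritized_average(sats):
    """Prioritized averaging: normalize by the total weight."""
    ws = prioritized_weights(sats)
    return sum(u * a for u, a in zip(ws, sats)) / sum(ws)

# Safety (priority 1) fully unmet: cost satisfaction cannot compensate.
print(prioritized_average([0.0, 1.0]))  # 0.0
print(prioritized_average([1.0, 0.3]))  # (1*1.0 + 1*0.3) / 2 = 0.65
```

With satisfactions `[0.0, 1.0]`, the second criterion's weight collapses to zero, which is exactly the non-compensation property.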
  • Applying intelligent systems in industry: A realistic overview

    Page(s): 6 - 20
    PDF (379 KB) | HTML

    The objective of this paper is to give a realistic overview of the current state of the art of intelligent systems in industry, based on the experience of applying these systems in a large global corporation. It includes a short analysis of the differences between academic and industrial research, examples of the key implementation areas of intelligent systems in manufacturing and business, a discussion of the main factors for success and failure of industrial intelligent systems, an estimate of the projected industrial needs that may drive future applications of intelligent systems, and some ideas on how to improve academic-industrial collaboration in this research area.
  • Bipolarity in preferences and intentions for more human consistent decision analysis and database querying

    Page(s): 21 - 26
    PDF (154 KB) | HTML

    The concept of a bipolar query, meant as a database query that involves both mandatory and optional conditions, is discussed from the point of view of, first, flexible database querying and, second, some newer approaches to decision making involving affects and judgments, unconventional multicriteria decision making, and the modeling of sophisticated user intentions and preferences of a positive and negative character. Aggregation of the matching degrees against the negative and positive conditions to derive an overall matching degree is considered in the fuzzy logic and possibilistic setting. Moreover, the use of a multiple-valued logic formalism for the representation of positive and negative desires in the context of intention modeling is advocated.
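One common semantics for bipolar queries in the flexible-querying literature (an illustrative assumption, not necessarily the aggregation this paper advocates) ranks tuples by the pair of matching degrees compared lexicographically, so the mandatory condition always dominates the optional one:

```python
def bipolar_match(required_deg, optional_deg):
    """Matching of one tuple as a pair (required degree, optional
    degree); lexicographic comparison makes the mandatory condition
    dominate, with the optional condition as a tiebreaker."""
    return (required_deg, optional_deg)

# Hypothetical tuples with (required, optional) matching degrees.
rows = {"t1": bipolar_match(0.9, 0.2),
        "t2": bipolar_match(0.9, 0.8),
        "t3": bipolar_match(0.5, 1.0)}
ranked = sorted(rows, key=lambda r: rows[r], reverse=True)
print(ranked)  # ['t2', 't1', 't3']
```

Note that `t3` matches the optional condition perfectly, yet ranks last: a higher optional degree cannot compensate for a weaker mandatory one under this ordering.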
  • Evolving spiking neural networks for spatio- and spectro-temporal pattern recognition

    Page(s): 27 - 32
    PDF (296 KB) | HTML

    This paper provides a survey on the evolution of the evolving connectionist systems (ECOS) paradigm, from the simple ECOS introduced in 1998 to evolving spiking neural networks (eSNN) and neurogenetic systems. It presents methods for their use in spatio- and spectro-temporal pattern recognition. Future directions are highlighted.
  • Advances in fuzzy systems and networks

    Page(s): 33 - 40
    PDF (702 KB) | HTML

    This plenary paper describes two novel fuzzy modelling methodologies for complex processes that are characterised by uncertainty, dimensionality and structure. The first methodology is based on rule base compression in fuzzy systems whereby model efficiency is improved without loss of model accuracy. The second methodology is based on fuzzy networks with modular rule bases whereby model transparency is improved at minimal loss of model accuracy. The two methodologies are validated comparatively on several case studies. The comparison shows the overall superiority of these methodologies to the current methodologies of rule base reduction in fuzzy systems and fuzzy networks with chained rule bases.
  • Battery aging detection based on sequential clustering and similarity analysis

    Page(s): 42 - 47
    PDF (481 KB) | HTML

    Battery cells are an important part of electric and hybrid vehicles, and their deterioration due to aging directly affects the life cycle and performance of the whole battery system. Early aging detection of a battery cell is therefore an important task, and its correct solution could significantly improve the performance of the whole vehicle. This paper presents a computational strategy for battery aging detection based on data chunks available from real operation of the vehicle. The first step is to aggregate (reduce) the original large amount of data into a much smaller number of cluster centers. This is done by a newly proposed sequential clustering algorithm that arranges the clusters in decreasing order of their volumes. The next step is the proposed fuzzy inference procedure for weighted approximation of the cluster centers, which creates a comparable one-dimensional fuzzy model for each available data set. Finally, the detection of the aged battery is treated as a similarity analysis problem, in which the pairwise distances between all battery cells are estimated by analyzing the predicted values from the respective fuzzy models. All three steps of the computational procedure are explained in the paper and applied to real experimental data for battery aging detection. The results are positive, and suggestions for further improvements are made in the conclusions.
  • Recognition of Old Cyrillic Slavic letters: Decision tree versus fuzzy classifier experiments

    Page(s): 48 - 53
    PDF (1288 KB) | HTML

    In this paper we compare two methods for classification of Old Slavic Cyrillic characters. Traditional character recognition programs cannot be applied to Old Slavic Church manuscripts due to the specific characteristics of these characters. The first classification method is based on a decision tree and the second one uses a fuzzy classifier. Both methods employ the same set of features extracted from the character bitmaps. The prototypes are obtained by applying logical operators to samples of digitized characters from original manuscripts. As features for defining a particular character we use the number and position of spots in the outer segments; the presence and position of beams and columns in horizontal and vertical segments, respectively; compactness; and symmetry. The fuzzy classifier creates a prototype consisting of fuzzy rules by means of fuzzy aggregation of character features. The classifier based on a decision tree is realized as a set of rules. During the creation of the classifier, several splitting measures are tested. We have created an application that implements the proposed classifiers and have experimentally tested their efficiency. Experimental results show that both classifiers correctly recognize about 50% of the characters. For 10% of the samples both classifiers make the same error, and for 11% of the characters the predicted character is incorrect and differs between the two classifiers.
  • H-FQL: A new reinforcement learning method for automatic hierarchization of fuzzy systems: An application to the route choice problem

    Page(s): 54 - 59
    PDF (409 KB) | HTML

    This paper proposes a new approach to automatic hierarchization of monolithic fuzzy systems based on an extension of the fuzzy Q-learning method. This approach contributes to the reduction of the fuzzy rule base without recourse to expert knowledge. It first suggests a new technique of automatic structural hierarchization, which associates the most correlated pairs of input variables through a statistical study of the sample base. It also proposes the auto-generation of rule bases using an adaptation of Fuzzy Q-Learning (FQL) to Hierarchical Fuzzy Systems (HFS). Finally, we apply the proposed approach to hierarchize an adaptive monolithic fuzzy system dealing with the route choice problem.
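The structural-hierarchization step pairs the most correlated input variables. A hypothetical sketch of that pairing (the greedy rule and the use of absolute Pearson correlation are assumptions for illustration; the paper's statistical study may differ):

```python
import numpy as np

def pair_most_correlated(X):
    """Greedily pair input variables (columns of X, rows = samples)
    by absolute Pearson correlation, strongest pairs first."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    np.fill_diagonal(corr, -1.0)  # never pair a variable with itself
    free, pairs = set(range(X.shape[1])), []
    while len(free) > 1:
        i, j = max(((i, j) for i in free for j in free if i < j),
                   key=lambda ij: corr[ij])
        pairs.append((i, j))
        free -= {i, j}
    return pairs
```

Each resulting pair would then feed one low-dimensional sub-system of the hierarchy, which is what keeps the hierarchical rule base small.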
  • Speckle noise reduction using an interval type-2 fuzzy sets filter

    Page(s): 60 - 66
    PDF (1823 KB) | HTML

    Fuzzy sets, which capture the meaning representation of linguistic variables, have been widely used in image processing in recent decades. Fuzzy sets are associated with vagueness, which is type-1 uncertainty. Interval-valued fuzzy sets (IVFS) are associated with type-2 semantic uncertainty. Indeed, the length of the interval provides the "non-specificity" measure for IVFS. We investigate this particular information measure applied to low-level image processing. The method can be used for both smoothing and noise filtering, and applications to speckle noise reduction show the interest of this concept.
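The core measure is simple: per pixel, non-specificity is just the length of the membership interval. A minimal sketch (how lower/upper memberships are built from the image, and how the measure drives the filter, are left out):

```python
def nonspecificity(lower, upper):
    """Per-pixel length of the membership interval: the IVFS
    non-specificity measure. A long interval marks an uncertain
    pixel, a candidate for stronger smoothing."""
    return [[u - l for l, u in zip(lrow, urow)]
            for lrow, urow in zip(lower, upper)]

# Hypothetical 2x2 lower/upper membership images.
lower = [[0.2, 0.4], [0.0, 0.9]]
upper = [[0.6, 0.5], [0.3, 1.0]]
print(nonspecificity(lower, upper))
```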
  • A no-reference computer-generated images quality metric and its application to denoising

    Page(s): 67 - 73
    PDF (1702 KB) | HTML

    A no-reference image quality metric detecting both blur and noise is proposed in this paper. The proposed metric is based on IFS2 entropy applied to computer-generated images and does not require any edge detection. Its value drops either when the test image becomes blurred or when it is corrupted by random noise. It can be thought of as an indicator of the signal-to-noise ratio of the image. Experiments using synthetic, natural and computer-generated images are presented to demonstrate the effectiveness and robustness of this metric. The proposed measure has also been compared with full-reference quality measures (or faithfulness measures) such as SSIM and gives satisfactory performance.
  • Potential field based formation control in trajectory tracking and obstacle avoidance tasks

    Page(s): 76 - 81
    PDF (291 KB) | HTML

    An approach using potential functions applied to formation control (including aggregation, collision-free goal-following behavior, and trajectory tracking) is proposed. Formation control is considered as a special form of agent aggregation, where the final aggregated form has to constitute a particular, previously determined geometrical configuration defined by a set of desired inter-agent distance values. This is achieved by defining an appropriate potential function that reaches its global minimum at the desired shape of the formation. The innovation in this paper is the modification of the potential function by adding new parts that take into consideration: (1) the desired orientation of the formation as a whole; (2) obstacle avoidance behavior; and (3) goal-following behavior. The control strategy is based on forcing the motion of the agents along the negative gradient of the potential field, taking into account the agents' non-holonomic kinematics. The proposed formation control is validated by simulations in the MATLAB environment.
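The negative-gradient idea can be sketched for the plain distance-based potential alone (holonomic point agents, a quadratic potential, and the goal-attraction gain are all simplifying assumptions; the paper's orientation and obstacle terms and the non-holonomic kinematics are omitted):

```python
import numpy as np

def formation_step(pos, desired, goal, eta=0.1, k_goal=0.2):
    """One gradient-descent step. pos is an (n, 2) array of agent
    positions; desired[i][j] is the target inter-agent distance.
    Agents descend the potential sum_{i<j} (|p_i - p_j| - d_ij)^2
    plus a quadratic attraction toward a common goal point."""
    n = len(pos)
    grad = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = pos[i] - pos[j]
            dist = np.linalg.norm(d)
            # gradient of (dist - desired)^2 with respect to pos[i]
            grad[i] += 2.0 * (dist - desired[i][j]) * d / dist
        grad[i] += k_goal * 2.0 * (pos[i] - goal)  # goal-attraction term
    return pos - eta * grad  # move along the negative gradient

# Two agents 1 apart, desired distance 2: they drift apart to 2.
pos = np.array([[0.0, 0.0], [1.0, 0.0]])
desired = [[0.0, 2.0], [2.0, 0.0]]
for _ in range(200):
    pos = formation_step(pos, desired, goal=np.array([0.5, 0.0]), k_goal=0.0)
# inter-agent distance has converged to the desired value 2
```

The desired shape is a global minimum of the potential, so repeated steps drive the inter-agent distances toward the `desired` matrix.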
  • Improving the grain quality assessment fusing data from image and spectra analyses

    Page(s): 82 - 89
    PDF (954 KB) | HTML

    The paper presents approaches, methods and tools for assessment of main quality features of grain samples that are based on color image and spectra analyses. Visible features like grain color, shape, and dimensions are extracted from the object images. Information about object color and surface texture is obtained from the object spectral characteristics. The categorization of the grain sample elements in three quality groups is accomplished using two data fusion approaches. The first approach is based on the fusion of the results about object color and shape characteristics obtained using image analysis only. The second approach fuses the shape data obtained by image analysis and the color and surface texture data obtained by spectra analysis. The results obtained by the two data fusion approaches are compared.
  • Hybrid Bayesian fusion of range-based and sourceless location estimates under varying observability

    Page(s): 90 - 95
    PDF (637 KB) | HTML

    This paper proposes a hybrid Bayesian approach to multi-sensor data fusion for 3D localization. The approach addresses the problem of fusing range-based and sourceless localization estimates under conditions of varying observability in the range-based sub-system. The proposed localization approach uses a mixture of Single Hypothesis Tracking (SHT) filtering (e.g. Kalman filtering) and Sequential Monte Carlo (SMC) filtering to improve accuracy under conditions of varying observability. Under conditions of sufficient or no range measurements, an SHT approach is used. Under conditions of insufficient range measurements (i.e. 1 or 2 ranges), SMC filtering is used, since it more accurately models the distribution of real error in the estimated positions using Gaussian mixtures rather than a single Gaussian. The results show up to a 10% improvement in 3D position estimation compared to a Single-Constraint-at-a-Time (SCAAT) approach and up to a 24% improvement compared to an Extended Kalman Filter approach for intermittent 3-second partial range occlusions when tracking human arm movements.
  • Extended object tracking with convolution particle filtering

    Page(s): 96 - 101
    PDF (575 KB) | HTML

    This paper proposes a sequential Monte Carlo filter (particle filter) for state and parameter estimation of dynamic systems. It is applied to the problem of extended object tracking in the presence of dense clutter. The unknown length of a stick-shape object is estimated in addition to the kinematic parameters. The kernel density estimation technique is utilised to approximate the joint posterior density of target state and static size parameters. The convolution particle filtering approach is validated on a Poisson model for the measurements, originating from the target and clutter. Examples illustrating the filter performance are presented. Simulation results show that the convolution particle filter provides accurate on-line tracking, with very good estimates both for the target kinematic states and for the parameters of the target extent.
  • Reducing speech recognition costs: By compressing the input data

    Page(s): 102 - 107
    PDF (352 KB) | HTML

    One of the key constraints on using embedded speech recognition modules is the required computational power. To decrease this requirement, we propose an algorithm that clusters the speech signal before passing it to the recognition units. The algorithm is based on agglomerative clustering and produces a sequence of compressed frames optimized for recognition. Our experimental results indicate that the proposed method achieves an average frame rate of 40 frames per second on medium-to-large-vocabulary isolated word recognition tasks without loss of recognition accuracy, which results in up to 60% faster recognition compared to the usual 100 fps fixed frame rate sampling. This value is quite close to the theoretically optimal value of 37.5 frames per second, while the best result of former approaches is about 60 frames per second.
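The compression idea — merge runs of near-identical feature frames into one representative frame before recognition — can be sketched as follows (the single-pass merging rule, Euclidean threshold, and centroid representative are hypothetical simplifications of agglomerative clustering, not the paper's algorithm):

```python
def compress_frames(frames, threshold):
    """Merge consecutive feature frames whose distance to the current
    cluster centroid is below threshold; emit one averaged frame per
    cluster. Steady speech segments collapse to few frames, lowering
    the recognizer's effective frame rate."""
    out, cluster = [], [frames[0]]
    for f in frames[1:]:
        centroid = [sum(c) / len(cluster) for c in zip(*cluster)]
        dist = sum((a - b) ** 2 for a, b in zip(f, centroid)) ** 0.5
        if dist < threshold:
            cluster.append(f)       # frame is similar: absorb it
        else:
            out.append(centroid)    # close the cluster, start a new one
            cluster = [f]
    out.append([sum(c) / len(cluster) for c in zip(*cluster)])
    return out

# Three identical frames then two identical frames -> two output frames.
print(compress_frames([[0.0], [0.0], [0.0], [5.0], [5.0]], 1.0))
# → [[0.0], [5.0]]
```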
  • On the behavior of Dempster's rule of combination and the foundations of Dempster-Shafer Theory

    Page(s): 108 - 113
    PDF (102 KB) | HTML

    On the basis of a simple emblematic example, we analyze and explain the inconsistent and inadequate behavior of Dempster-Shafer's rule of combination as a method for combining sources of evidence. We identify the cause and the effect of the dictatorial behavior of this rule and of its inability to manage conflicts between the sources. For comparison, we present the corresponding solution obtained by the more efficient PCR5 fusion rule, proposed originally in the Dezert-Smarandache Theory framework. Finally, we identify and prove an inherent contradiction in the foundations of Dempster-Shafer Theory.
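The dictatorial behavior in question is easy to reproduce with Zadeh's classical counterexample (whether this is the paper's emblematic example is not stated; it illustrates the same failure mode). Restricting Dempster's rule to singleton focal elements:

```python
def dempster(m1, m2):
    """Dempster's rule for mass functions on singleton focal elements:
    conjunctive combination, then normalization by 1 - conflict.
    Assumes total conflict < 1 (otherwise the rule is undefined)."""
    combined, conflict = {}, 0.0
    for a, p in m1.items():
        for b, q in m2.items():
            if a == b:
                combined[a] = combined.get(a, 0.0) + p * q
            else:
                conflict += p * q  # disjoint singletons: mass is conflict
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

# Zadeh's example: two almost fully conflicting experts.
m1 = {"A": 0.99, "B": 0.01}
m2 = {"C": 0.99, "B": 0.01}
print(dempster(m1, m2))  # {'B': 1.0} — the barely supported B wins outright
```

Hypothesis B, which both sources consider nearly impossible, receives full belief after combination — the counterintuitive outcome that motivates conflict-redistribution rules such as PCR5.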
  • Human integration of motion and texture information in visual slant estimation

    Page(s): 114 - 119
    PDF (194 KB) | HTML

    The present research aims to: (i) characterize the ability of the human visual system to determine objects' slant on the basis of a combination of visual stimulus characteristics that in general are uncertain and even conflicting; (ii) evaluate the influence of human age on visual cue assessment and processing; and (iii) estimate the process of human visual cue integration based on the well-known Normalized Conjunctive Consensus and Averaging fusion rules, as well as on the basis of the more efficient probabilistic Proportional Conflict Redistribution rule no. 5, defined within Dezert-Smarandache Theory for plausible and paradoxical reasoning. The impact of the research is focused on the ability of these fusion rules to adequately predict the behavior of individuals, as well as of age-contingent groups of individuals, in the visual cue integration process.
  • Intelligent alarm classification based on DSmT

    Page(s): 120 - 125
    PDF (411 KB) | HTML

    In this paper the critical issue of alarm classification and prioritization (in terms of degree of danger) is considered and realized on the basis of Proportional Conflict Redistribution rule no. 5, defined in the Dezert-Smarandache Theory of plausible and paradoxical reasoning. The results obtained show the strong ability of this rule to account, in a coherent and stable way, for the evolution of all possible degrees of danger relating to a set of a priori defined, out-of-the-ordinary dangerous directions. A comparison with the performance of Dempster's rule is also provided. Dempster's rule shows weakness in resolving the cases examined. In the Emergency case, Dempster's rule does not respond to the level of conflict between sound sources, leading to ungrounded decisions. In the case of the lowest danger priority (perturbed Warning mode), Dempster's rule could cause a false alarm and can deflect attention from the existing real dangerous source by assigning a wrong steering direction to the surveillance camera.
  • A PCA-based technique for compensating the effect of sensor position changes in motion data

    Page(s): 126 - 131
    PDF (1292 KB) | HTML

    Failure to place markers/sensors accurately when motion-capturing the human body is probably the single greatest contributor to measurement variability in rehabilitation applications. In this paper, we study the effect of inadvertent changes in the position of sensors when evaluating similarities between similar actions in motion capture. We then use a Principal Component Analysis (PCA) based technique, followed by additional mechanisms, to compensate for the effect of sensor position changes within the motion data. The data collected from our designed experiments show differences between similar actions with different marker placements. By applying signal processing, we show how these variations can be compensated for, providing more accurate motion data.
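The PCA step alone can be sketched as follows (a simplification, not the paper's full pipeline with its additional mechanisms): expressing each recording in its own principal-axis frame cancels a constant sensor offset and an orientation change, so two recordings of the same action become comparable despite different placements.

```python
import numpy as np

def pca_normalize(X):
    """Express motion samples (rows of X) in their own principal-axis
    frame: centering removes a constant sensor offset, and rotating
    into the covariance eigenbasis removes a sensor re-orientation.
    Components are returned in decreasing order of variance, with a
    per-component sign ambiguity left unresolved."""
    Xc = X - X.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))  # ascending order
    return Xc @ vecs[:, ::-1]  # reorder: largest variance first
```

After normalization, a recording and a rotated-plus-shifted copy of it agree componentwise up to sign, so a sign-insensitive comparison of the two yields (near-)zero difference.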
  • Distance measure adaptation based on local feature weighting

    Page(s): 132 - 137
    PDF (318 KB) | HTML

    The performance of the Nearest Neighbor (NN) classifier is highly dependent on the distance function used to find the NN of an input test pattern. Many of the proposed algorithms try to optimize the accuracy of the NN rule using a weighted distance function. In the proposed method, the distance function is defined in a parametric form to incorporate the local relevancy of the features in the decision boundary of the prototype. The local weight of each feature is determined according to the amount of information it provides about the discrimination of different classes for each prototype. In this method a novel learning algorithm tunes the weight vector of the prototypes. The learning method uses an entropy-based objective function that is optimized by a gradient-descent technique. A new entropy measure is proposed in which the decision boundary of a prototype is a fuzzy region. We show that our scheme has comparable or better performance than some recent methods proposed in the literature.
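The parametric form amounts to a per-prototype weighted distance. A minimal sketch of classification with such weights (the weight values here are hand-picked for illustration; the paper learns them by gradient descent on an entropy objective, which is omitted):

```python
def weighted_dist(x, prototype, weights):
    """Per-prototype weighted squared Euclidean distance: weights[k]
    encodes how discriminative feature k is near this prototype."""
    return sum(w * (a - b) ** 2
               for w, a, b in zip(weights, x, prototype))

def classify(x, prototypes):
    """1-NN over (prototype, weights, label) triples, each prototype
    measured with its own local distance."""
    return min(prototypes,
               key=lambda p: weighted_dist(x, p[0], p[1]))[2]

# Hypothetical prototypes: 'a' treats feature 2 as barely relevant.
protos = [([0.0, 0.0], [1.0, 0.1], "a"),
          ([0.0, 3.0], [1.0, 1.0], "b")]
print(classify([0.1, 2.0], protos))  # 'a': feature 2 is down-weighted
```

A plain Euclidean 1-NN would assign `[0.1, 2.0]` to `b` (it is closer to `[0, 3]` than to `[0, 0]`); the local weights flip the decision because feature 2 is deemed uninformative near prototype `a`.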
  • Cognitive workload and affective state: A computational study using Bayesian networks

    Page(s): 140 - 145
    PDF (253 KB) | HTML

    This paper uses Bayesian networks to investigate the impact of three different kinds of inputs, namely physiological, cognitive and affect features, on workload estimation from a computational point of view. The ability of the proposed models to infer the workload variation of subjects involved in successive tasks demanding different levels of cognitive resources is discussed in terms of two criteria to be jointly optimized: diversity, i.e. the ability of the model to perform on different subjects, and accuracy, i.e. how close the model prediction is to the (subjectively estimated) workload level.
  • A TSK-based fuzzy system for telecommunications time-series forecasting

    Page(s): 146 - 151
    PDF (244 KB) | HTML

    A two-stage model-building process for generating a Takagi-Sugeno-Kang fuzzy forecasting system is proposed in this paper. In particular, the Subtractive Clustering (SC) method is first employed to partition the input space and determine the number of fuzzy rules and the premise parameters. Subsequently, an Orthogonal Least Squares (OLS) estimator determines the input terms that should be included in the consequent part of each fuzzy rule and calculates their parameters. A comparative analysis with well-established forecasting models is conducted on real-world telecommunications data, in order to investigate the forecasting capabilities of the proposed scheme.
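Once SC has fixed the premises and OLS the consequents, the fitted system performs standard TSK inference. A minimal sketch of that inference step (the rules below are made up; the SC and OLS fitting stages are omitted):

```python
import math

def tsk_forecast(x, rules):
    """First-order TSK inference: each rule is (centers, sigma,
    coeffs). Firing strength is a Gaussian membership of x around the
    rule's premise centers; the output is the firing-weighted average
    of the rules' linear consequents coeffs[0] + coeffs[1:] . x."""
    num = den = 0.0
    for centers, sigma, coeffs in rules:
        firing = math.exp(-sum((xi - c) ** 2 for xi, c in zip(x, centers))
                          / (2.0 * sigma ** 2))
        consequent = coeffs[0] + sum(a * xi for a, xi in zip(coeffs[1:], x))
        num += firing * consequent
        den += firing
    return num / den

# Two hypothetical rules with constant consequents for clarity.
rules = [([0.0], 1.0, [1.0, 0.0]),   # near x=0, forecast about 1
         ([4.0], 1.0, [3.0, 0.0])]   # near x=4, forecast about 3
print(tsk_forecast([0.0], rules))    # ≈ 1.0 (the nearby rule dominates)
```

Between the two rule centers the forecast interpolates smoothly, which is what makes the rule base compact: a few local linear models cover the whole input space.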