
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics

Issue 2 • Date April 2004


Displaying Results 1 - 25 of 54
  • Table of contents

    Publication Year: 2004 , Page(s): c1 - 805
  • IEEE Transactions on Systems, Man, and Cybernetics—Part B: Cybernetics publication information

    Publication Year: 2004 , Page(s): c2
  • Modeling and convergence analysis of distributed coevolutionary algorithms

    Publication Year: 2004 , Page(s): 806 - 822
    Cited by:  Papers (9)  |  Patents (7)

    A theoretical foundation is presented for modeling and convergence analysis of a class of distributed coevolutionary algorithms applied to optimization problems in which the variables are partitioned among p nodes. An evolutionary algorithm at each of the p nodes performs a local evolutionary search based on its own set of primary variables, and the secondary variable set at each node is clamped during this phase. An infrequent intercommunication between the nodes updates the secondary variables at each node. The local search and intercommunication phases alternate, resulting in a cooperative search by the p nodes. First, we specify a theoretical basis for a class of centralized evolutionary algorithms in terms of construction and evolution of sampling distributions over the feasible space. Next, this foundation is extended to develop a model for a class of distributed coevolutionary algorithms. Convergence and convergence rate analyses are pursued for basic classes of objective functions. Our theoretical investigation reveals that for certain unimodal and multimodal objectives, we can expect these algorithms to converge at a geometrical rate. The distributed coevolutionary algorithms are of most interest from the perspective of their performance advantage compared to centralized algorithms, when they execute in a network environment with significant local access and internode communication delays. The relative performance of these algorithms is therefore evaluated in a distributed environment with realistic parameters of network behavior.
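The alternating local-search/intercommunication scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the separable test objective, mutation scheme, and all parameter values are assumptions.

```python
import random

def sphere(x):
    # Separable test objective (an assumption; the paper treats broader classes).
    return sum(v * v for v in x)

def coevolve(dim=8, nodes=2, epochs=25, local_steps=80, seed=0):
    rng = random.Random(seed)
    best = [rng.uniform(-5, 5) for _ in range(dim)]
    block = dim // nodes
    sigma = 0.5
    for _ in range(epochs):
        # Local phase: each node perturbs only its own primary block,
        # with the secondary variables clamped at their current values.
        for n in range(nodes):
            lo = n * block
            for _ in range(local_steps):
                i = lo + rng.randrange(block)
                cand = best[:]
                cand[i] += rng.gauss(0, sigma)
                if sphere(cand) < sphere(best):
                    best = cand
        # Intercommunication phase: nodes exchange their improved blocks
        # (trivial here because `best` is shared; a distributed version
        # would broadcast each node's primary block to the others).
        sigma *= 0.8
    return best

x = coevolve()
print(sphere(x))
```

With the decaying mutation width, the cooperative search settles near the optimum even though each node only ever mutates its own partition of the variables.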
  • The impact of countermeasure propagation on the prevalence of computer viruses

    Publication Year: 2004 , Page(s): 823 - 833
    Cited by:  Papers (23)

    Countermeasures such as software patches or warnings can be effective in helping organizations avert virus infection problems. However, current strategies for disseminating such countermeasures have limited their effectiveness. We propose a new approach, called the countermeasure competing (CMC) strategy, and use computer simulation to formally compare its relative effectiveness with three antivirus strategies currently under consideration. CMC is based on the idea that computer viruses and countermeasures spread through two separate but interlinked complex networks - the virus-spreading network and the countermeasure-propagation network, in which a countermeasure acts as a competing species against the computer virus. Our results show that CMC is more effective than other strategies based on the empirical virus data. The proposed CMC reduces the size of virus infection significantly when the countermeasure-propagation network has properties that favor countermeasures over viruses, or when the countermeasure-propagation rate is higher than the virus-spreading rate. In addition, our work reveals that CMC can be flexibly adapted to different uncertainties in the real world, enabling it to be tuned to a greater variety of situations than other strategies. View full abstract»
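The competing-species idea can be illustrated with a toy discrete-time simulation. Everything below is an assumption for illustration: the paper uses two interlinked, empirically derived networks, whereas this sketch uses a single random directed contact graph and arbitrary rates.

```python
import random

def simulate(n=200, k=4, beta_virus=0.3, beta_cm=0.6, steps=20, seed=1):
    rng = random.Random(seed)
    # Random directed contact graph, k out-edges per node (an assumed
    # topology; the paper uses separate virus and countermeasure networks).
    nbrs = [rng.sample([j for j in range(n) if j != i], k) for i in range(n)]
    state = ["S"] * n          # S: susceptible, V: infected, C: immunized
    state[0] = "V"             # virus seed
    state[1] = "C"             # countermeasure seed
    for _ in range(steps):
        new = state[:]
        for i in range(n):
            for j in nbrs[i]:
                if state[i] == "V" and state[j] == "S" and rng.random() < beta_virus:
                    new[j] = "V"
                # The countermeasure competes: it immunizes susceptible
                # nodes and disinfects already-infected ones.
                if state[i] == "C" and state[j] in ("S", "V") and rng.random() < beta_cm:
                    new[j] = "C"
        state = new
    return state.count("V")

fast_cm = simulate(beta_cm=0.6)   # countermeasure outpaces the virus
slow_cm = simulate(beta_cm=0.05)  # countermeasure lags behind
print(fast_cm, slow_cm)
```

Even this crude model reproduces the qualitative claim: when the countermeasure-propagation rate exceeds the virus-spreading rate, the final infection size shrinks sharply.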
  • Mining Pinyin-to-character conversion rules from large-scale corpus: a rough set approach

    Publication Year: 2004 , Page(s): 834 - 844
    Cited by:  Papers (7)

    The paper introduces a rough set technique for solving the problem of mining Pinyin-to-character (PTC) conversion rules. It first presents a text-structuring method by constructing a language information table from a corpus for each pinyin, which it will then apply to a free-form textual corpus. Data generalization and rule extraction algorithms can then be used to eliminate redundant information and extract consistent PTC conversion rules. The design of our model also addresses a number of important issues such as the long-distance dependency problem, the storage requirements of the rule base, and the consistency of the extracted rules, while the performance of the extracted rules as well as the effects of different model parameters are evaluated experimentally. The results show that, with the smoothing method, high conversion precision (0.947) and recall (0.84) can be achieved even for rules represented directly by pinyin rather than words. A comparison with the baseline tri-gram model also shows good complement between our method and the tri-gram language model.
  • Run-length chain coding and scalable computation of a shape's moments using reconfigurable optical buses

    Publication Year: 2004 , Page(s): 845 - 855
    Cited by:  Papers (3)

    The main contribution of this paper is the design of several efficient algorithms for modified run-length chain coding and for computing a shape's moments on arrays with reconfigurable optical buses. The proposed algorithms are based on the boundary representation of an object. Instead of using chain code, the boundary can be represented by a modified run-length chain code, where each entity represents a line segment (two adjacent corner pixels). The sequential nature of the chain code makes it difficult to parallelize. We first propose two constant time algorithms for boundary extraction and run-length chain coding. To the authors' knowledge, these are the most time efficient algorithms yet published. Based on the modified run-length chain coding, and the advantages of both optical transmission and electronic computation, a constant time parallel algorithm for computing a shape's moments using N×N processors is proposed. Additionally, instead of using N×N processors, a scalable moment algorithm using r×r processors is also derived, where r …
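The quantity being parallelized above, a shape's geometric moments computed from a run-length representation, can be stated in a few lines of sequential reference code. This is only the definition, not the paper's constant-time algorithm on reconfigurable optical buses.

```python
def moments_from_runs(runs, max_order=1):
    # runs: list of (y, x_start, x_end) horizontal runs covering a binary
    # shape (inclusive ends). Returns geometric moments m[p, q] =
    # sum over shape pixels of x**p * y**q.
    m = {}
    for p in range(max_order + 1):
        for q in range(max_order + 1):
            total = 0
            for y, x0, x1 in runs:
                total += (y ** q) * sum(x ** p for x in range(x0, x1 + 1))
            m[(p, q)] = total
    return m

# A 3x3 square of pixels at rows/columns 1..3: area 9, centroid (2, 2).
runs = [(y, 1, 3) for y in (1, 2, 3)]
m = moments_from_runs(runs)
print(m[(0, 0)], m[(1, 0)] / m[(0, 0)], m[(0, 1)] / m[(0, 0)])
```

The run-length form already saves work over a per-pixel scan, since each inner sum over a run could be replaced by a closed-form power-sum formula.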
  • Efficient three-dimensional metric object modeling from uncalibrated image sequences

    Publication Year: 2004 , Page(s): 856 - 876
    Cited by:  Papers (2)

    This paper presents a scheme that addresses the practical issues associated with producing a geometric model of a scene using a passive sensing technique. The proposed image-based scheme comprises a recursive structure recovery method and a recursive surface reconstruction technique. The former method employs a robust corner-tracking algorithm that copes with the appearance and disappearance of features and a corner-based structure and motion estimation algorithm that handles the inclusion and expiration of features. The novel formulation and dual extended Kalman filter computational framework of the estimation algorithm provide an efficient approach to metric structure recovery that does not require any prior knowledge about the camera or scene. The newly developed surface reconstruction technique employs a visibility constraint to iteratively refine and ultimately yield a triangulated surface that envelops the recovered scene structure and can produce views consistent with those of the original image sequence. Results on simulated data and synthetic and real imagery illustrate that the proposed scheme is robust, accurate, and has good numerical stability, even when features are repeatedly absent or their image locations are affected by extreme levels of noise.
  • Fuzzy differential inclusions in atmospheric and medical cybernetics

    Publication Year: 2004 , Page(s): 877 - 887
    Cited by:  Papers (6)

    Uncertainty management in dynamical systems is receiving attention in artificial intelligence, particularly in the fields of qualitative and model based reasoning. Fuzzy dynamical systems occupy a very important position in the class of uncertain systems. It is well established that the fuzzy dynamical systems represented by a set of fuzzy differential inclusions (FDI) are very convenient tools for modeling and simulation of various uncertain systems. In this paper, we discuss the mathematical modeling of two very complex natural phenomena by means of FDIs. One of them belongs to the atmospheric cybernetics (the term has been used in a broad sense) of the genesis of a cyclonic storm (cyclogenesis), and the other belongs to the bio-medical cybernetics of the evolution of a tumor in the human body. Since a discussion of the former already appears in a previous paper by the first author, here, we present very briefly a theoretical formalism of cyclone formation. On the other hand, we treat the latter system more elaborately. We solve the FDIs with the help of an algorithm developed in this paper to numerically simulate the mathematical models. From the simulation results thus obtained, we have drawn a number of interesting conclusions, which have been verified, and this vindicates the validity of our models.
  • Relevant, irredundant feature selection and noisy example elimination

    Publication Year: 2004 , Page(s): 888 - 897
    Cited by:  Papers (12)

    In many real-world situations, the method for computing the desired output from a set of inputs is unknown. One strategy for solving these types of problems is to learn the input-output functionality from examples in a training set. However, in many situations it is difficult to know what information is relevant to the task at hand. Consequently, researchers have investigated ways to deal with the so-called problem of consistency of attributes, i.e., attributes that can distinguish examples from different classes. In this paper, we first prove that the notion of relevance of attributes is directly related to the consistency of attributes, and show how relevant, irredundant attributes can be selected. We then compare different relevant attribute selection algorithms, and show the superiority of algorithms that select irredundant attributes over those that select relevant attributes. We also show that searching for an "optimal" subset of attributes, which is considered to be the main purpose of attribute selection, is not the best way to improve the accuracy of classifiers. Employing sets of relevant, irredundant attributes improves classification accuracy in many more cases. Finally, we propose a new method for selecting relevant examples, which is based on filtering the so-called pattern frequency domain. By identifying examples that are nontypical in the determination of relevant, irredundant attributes, irrelevant examples can be eliminated prior to the learning process. Empirical results using artificial and real databases show the effectiveness of the proposed method in selecting relevant examples, leading to improved performance even on greatly reduced training sets.
  • Sparse modeling using orthogonal forward regression with PRESS statistic and regularization

    Publication Year: 2004 , Page(s): 898 - 911
    Cited by:  Papers (65)

    The paper introduces an efficient construction algorithm for obtaining sparse linear-in-the-weights regression models based on an approach of directly optimizing model generalization capability. This is achieved by utilizing the delete-1 cross validation concept and the associated leave-one-out test error, also known as the predicted residual sums of squares (PRESS) statistic, without resorting to any other validation data set for model evaluation in the model construction process. Computational efficiency is ensured using an orthogonal forward regression, but the algorithm incrementally minimizes the PRESS statistic instead of the usual sum of the squared training errors. A local regularization method can naturally be incorporated into the model selection procedure to further enforce model sparsity. The proposed algorithm is fully automatic, and the user is not required to specify any criterion to terminate the model construction procedure. Comparisons with some of the existing state-of-the-art modeling methods are given, and several examples are included to demonstrate the ability of the proposed algorithm to effectively construct sparse models that generalize well.
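The PRESS statistic at the heart of this paper can be computed without refitting the model n times: for least squares, the leave-one-out residual equals the ordinary residual scaled by 1/(1 - h_ii), where h_ii is the hat-matrix diagonal. The sketch below shows this shortcut for plain simple linear regression and checks it against brute-force deletion; the paper's contribution, minimizing PRESS incrementally inside an orthogonal forward regression, is not reproduced here, and the data are made up.

```python
def fit(xs, ys):
    # Ordinary least squares for y = a + b*x.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return my - b * mx, b

def press(xs, ys):
    # PRESS via the hat-matrix shortcut: the leave-one-out residual is
    # e_i / (1 - h_ii), so no model needs to be refitted.
    n = len(xs)
    a, b = fit(xs, ys)
    mx = sum(xs) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    total = 0.0
    for x, y in zip(xs, ys):
        e = y - (a + b * x)
        h = 1.0 / n + (x - mx) ** 2 / sxx
        total += (e / (1.0 - h)) ** 2
    return total

def press_brute(xs, ys):
    # Reference implementation: actually refit with each point deleted.
    total = 0.0
    for i in range(len(xs)):
        a, b = fit(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        total += (ys[i] - (a + b * xs[i])) ** 2
    return total

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.1, 0.9, 2.2, 2.8, 4.1, 5.2]
print(press(xs, ys), press_brute(xs, ys))
```

The two computations agree to floating-point precision, which is why PRESS can serve as a cheap model-selection criterion during forward regression.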
  • A generalized framework for interactive dynamic simulation for multirigid bodies

    Publication Year: 2004 , Page(s): 912 - 924
    Cited by:  Papers (4)

    This paper presents a generalized framework for dynamic simulation realized in a prototype simulator called the Interactive Generalized Motion Simulator (I-GMS), which can simulate motions of multirigid-body systems with contact interaction in virtual environments. I-GMS is designed to meet two important goals: generality and interactivity. By generality, we mean a dynamic simulator which can easily support various systems of rigid bodies, ranging from a single free-flying rigid object to complex linkages such as those needed for robotic systems or human body simulation. To provide this generality, we have developed I-GMS in an object-oriented framework. The user interactivity is supported through a haptic interface for articulated bodies, introducing interactive dynamic simulation schemes. This user interaction is achieved by performing push and pull operations via the PHANToM haptic device, which runs as an integrated part of I-GMS. Also, a hybrid scheme was used for simulating internal contacts (between bodies in the multirigid-body system) in the presence of friction, which could avoid the nonexistent solution problem often faced when solving contact problems with Coulomb friction. In our hybrid scheme, two impulse-based methods are exploited so that different methods are applied adaptively, depending on whether the current contact situation is characterized as "bouncing" or "steady." We demonstrate the user-interaction capability of I-GMS through online editing of trajectories of a 6-degree of freedom (dof) articulated structure.
  • MAGMA: a multiagent architecture for metaheuristics

    Publication Year: 2004 , Page(s): 925 - 941
    Cited by:  Papers (14)

    In this work, we introduce a multiagent architecture called the MultiAGent Metaheuristic Architecture (MAGMA) conceived as a conceptual and practical framework for metaheuristic algorithms. Metaheuristics can be seen as the result of the interaction among different kinds of agents: The basic architecture contains three levels, each hosting one or more agents. Level-0 agents build solutions, level-1 agents improve solutions, and level-2 agents provide the high level strategy. In this framework, classical metaheuristic algorithms can be smoothly accommodated and extended. The basic three level architecture can be enhanced with the introduction of a fourth level of agents (level-3 agents) coordinating lower level agents. With this additional level, MAGMA can also describe, in a uniform way, cooperative search and, in general, any combination of metaheuristics. We describe the entire architecture, the structure of agents in each level in terms of tuples, and the structure of their coordination as a labeled transition system. We propose this perspective with the aim of achieving a better and clearer understanding of metaheuristics, obtaining hybrid algorithms, and suggesting guidelines for a software engineering-oriented implementation and for didactic purposes. Some specializations of the general architecture are provided in order to show that existing metaheuristics [e.g., greedy randomized adaptive procedure (GRASP), ant colony optimization (ACO), iterated local search (ILS), memetic algorithms (MAs)] can be easily described in our framework. We describe cooperative search and large neighborhood search (LNS) in the proposed framework exploiting level-3 agents. We show also that a simple hybrid algorithm, called guided restart ILS, can be easily conceived as a combination of existing components in our framework.
  • State observer based robust adaptive fuzzy controller for nonlinear uncertain and perturbed systems

    Publication Year: 2004 , Page(s): 942 - 950
    Cited by:  Papers (13)

    A robust adaptive fuzzy controller, based on a state observer, for a nonlinear uncertain and perturbed system is presented. The state observer is introduced to resolve the problem of the unavailability of the state variables. Two control signals are added to a basic state feedback control law, deduced from a nominal model, to guarantee the tracking performance in the presence of structural uncertainties and external disturbances. The first control signal is computed from an adaptive fuzzy system and eliminates the effect of structural uncertainties and estimation errors. Updating the adjustable parameters is ensured by a PID law to obtain a fast convergence. Robustness of the closed-loop system is guaranteed by an H∞ supervisor computed from a Riccati type equation. A simulation example is presented to show the efficiency of the proposed method.
  • A hybrid neural network model for noisy data regression

    Publication Year: 2004 , Page(s): 951 - 960
    Cited by:  Papers (21)

    A hybrid neural network model, based on the fusion of fuzzy adaptive resonance theory (fuzzy ART, or FA) and the general regression neural network (GRNN), is proposed in this paper. Both FA and the GRNN are incremental learning systems and are very fast in network training. The proposed hybrid model, denoted as GRNNFA, is able to retain these advantages and, at the same time, to reduce the computational requirements in calculating and storing information of the kernels. A clustering version of the GRNN is designed with data compression by FA for noise removal. An adaptive gradient-based kernel width optimization algorithm has also been devised. Convergence of the gradient descent algorithm can be accelerated by the geometric incremental growth of the updating factor. A series of experiments with four benchmark datasets have been conducted to assess and compare the effectiveness of GRNNFA with other approaches. The GRNNFA model is also employed in a novel application task for predicting the evacuation time of patrons at typical karaoke centers in Hong Kong in the event of fire. The results positively demonstrate the applicability of GRNNFA in noisy data regression problems.
  • Geometry of Dempster's rule of combination

    Publication Year: 2004 , Page(s): 961 - 977
    Cited by:  Papers (5)

    In this paper, we analyze Shafer's belief functions (BFs) as geometric entities, focusing in particular on the geometric behavior of Dempster's rule of combination in the belief space, i.e., the set SΘ of all the admissible BFs defined over a given finite domain Θ. The study of the orthogonal sums of affine subspaces allows us to unveil a convex decomposition of Dempster's rule of combination in terms of Bayes' rule of conditioning and prove that under specific conditions orthogonal sum and affine closure commute. A direct consequence of these results is the simplicial shape of the conditional subspaces, i.e., the sets of all the possible combinations of a given BF s. We show how Dempster's rule exhibits a rather elegant behavior when applied to BFs assigning the same mass to a fixed subset (constant mass loci). The resulting affine spaces have a common intersection that is characteristic of the conditional subspace, called focus. The affine geometry of these foci eventually suggests an interesting geometric construction of the orthogonal sum of two BFs.
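For readers unfamiliar with the operation being analyzed geometrically, Dempster's rule itself is short to state in code: intersect the focal elements of two mass functions, multiply their masses, and renormalize by one minus the conflicting mass. The frame and the two example belief functions below are made up for illustration.

```python
def dempster(m1, m2):
    # Dempster's rule of combination for two mass functions given as
    # {frozenset_of_elements: mass}. Assumes the inputs are not totally
    # conflicting (otherwise the normalizer 1 - conflict would be zero).
    combined = {}
    conflict = 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            c = a & b
            if c:
                combined[c] = combined.get(c, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    k = 1.0 - conflict
    return {s: w / k for s, w in combined.items()}

# Frame {x, y, z}; two belief functions with partially overlapping focal sets.
m1 = {frozenset("xy"): 0.8, frozenset("xyz"): 0.2}
m2 = {frozenset("yz"): 0.6, frozenset("xyz"): 0.4}
m = dempster(m1, m2)
print(sorted((sorted(s), round(w, 4)) for s, w in m.items()))
```

The combined masses again sum to one, which is the normalization property the paper's convex decomposition works with.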
  • Formulation of radiometric feasibility measures for feature selection and planning in visual servoing

    Publication Year: 2004 , Page(s): 978 - 987
    Cited by:  Papers (2)

    Feature selection and planning are integral parts of visual servoing systems. Because many irrelevant and nonreliable image features usually exist, higher accuracy and robustness can be expected by selecting and planning good features. Assumption of perfect radiometric conditions is common in visual servoing. This paper discusses the issue of radiometric constraints for feature selection in the context of visual servoing. In this paper, radiometric constraints are presented and measures are formulated to select the optimal features (in a radiometric sense) from a set of candidate features. Simulation and experimental results verify the effectiveness of the proposed measures.
  • An adaptive high-order neural tree for pattern recognition

    Publication Year: 2004 , Page(s): 988 - 996
    Cited by:  Papers (13)

    A new neural tree model, called adaptive high-order neural tree (AHNT), is proposed for classifying large sets of multidimensional patterns. The AHNT is built by recursively dividing the training set into subsets and by assigning each subset to a different child node. Each node is composed of a high-order perceptron (HOP) whose order is automatically tuned taking into account the complexity of the pattern set reaching that node. First-order nodes divide the input space with hyperplanes, while HOPs divide the input space arbitrarily, but at the expense of increased complexity. Experimental results demonstrate that the AHNT generalizes better than trees with homogeneous nodes, produces small trees and avoids the use of complex comparative statistical tests and/or a priori selection of large parameter sets.
  • A hybrid of genetic algorithm and particle swarm optimization for recurrent network design

    Publication Year: 2004 , Page(s): 997 - 1006
    Cited by:  Papers (205)  |  Patents (1)

    An evolutionary recurrent network which automates the design of recurrent neural/fuzzy networks using a new evolutionary learning algorithm is proposed in this paper. This new evolutionary learning algorithm is based on a hybrid of genetic algorithm (GA) and particle swarm optimization (PSO), and is thus called HGAPSO. In HGAPSO, individuals in a new generation are created, not only by crossover and mutation operation as in GA, but also by PSO. The concept of elite strategy is adopted in HGAPSO, where the upper-half of the best-performing individuals in a population are regarded as elites. However, instead of being reproduced directly to the next generation, these elites are first enhanced. The group constituted by the elites is regarded as a swarm, and each elite corresponds to a particle within it. In this regard, the elites are enhanced by PSO, an operation which mimics the maturing phenomenon in nature. These enhanced elites constitute half of the population in the new generation, whereas the other half is generated by performing crossover and mutation operation on these enhanced elites. HGAPSO is applied to recurrent neural/fuzzy network design as follows. For recurrent neural network, a fully connected recurrent neural network is designed and applied to a temporal sequence production problem. For recurrent fuzzy network design, a Takagi-Sugeno-Kang-type recurrent fuzzy network is designed and applied to dynamic plant control. The performance of HGAPSO is compared to both GA and PSO in these recurrent network design problems, demonstrating its superiority.
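The HGAPSO generation loop (rank the population, enhance the elite half with a PSO step, breed the other half from the enhanced elites) can be sketched on a toy function minimization. This is a simplified reading of the abstract, not the authors' implementation: the social-only PSO update, all coefficients, and the sphere objective are assumptions, and the paper applies the hybrid to recurrent network weights rather than a test function.

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def hgapso(fitness, dim=4, pop_size=20, gens=60, seed=2):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    vel = [[0.0] * dim for _ in range(pop_size)]
    half = pop_size // 2
    best_seen = min(pop, key=fitness)[:]
    for _ in range(gens):
        order = sorted(range(pop_size), key=lambda i: fitness(pop[i]))
        elites = [pop[i] for i in order[:half]]
        evel = [vel[i] for i in order[:half]]
        gbest = elites[0]
        # PSO step "matures" the elites: pull each one toward the best elite.
        new_elites, new_vel = [], []
        for x, v in zip(elites, evel):
            nv = [0.7 * vi + 1.5 * rng.random() * (g - xi)
                  for xi, vi, g in zip(x, v, gbest)]
            new_elites.append([xi + vi for xi, vi in zip(x, nv)])
            new_vel.append(nv)
        # GA step: crossover and mutation of the enhanced elites
        # generate the other half of the next population.
        children = []
        for _ in range(pop_size - half):
            a, b = rng.sample(new_elites, 2)
            cut = rng.randrange(1, dim)
            child = a[:cut] + b[cut:]
            child[rng.randrange(dim)] += rng.gauss(0, 0.3)
            children.append(child)
        pop = new_elites + children
        vel = new_vel + [[0.0] * dim for _ in children]
        cand = min(pop, key=fitness)
        if fitness(cand) < fitness(best_seen):
            best_seen = cand[:]
    return best_seen

best = hgapso(sphere)
print(sphere(best))
```

The point of the hybrid is visible in the loop structure: PSO supplies directed refinement of the elites, while GA crossover/mutation keeps injecting variation into the rest of the population.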
  • Localization-based sensor validation using the Kullback-Leibler divergence

    Publication Year: 2004 , Page(s): 1007 - 1016
    Cited by:  Papers (7)

    A sensor validation criterion based on the sensor's object localization accuracy is proposed. Assuming that the true probability distribution of an object or event in space f(x) is known and a spatial likelihood function (SLF) ψ(x) for the same object or event in space is obtained from a sensor, then the expected value of the SLF E[ψ(x)] is proposed as a suitable validity metric for the sensor, where the expectation is performed over the distribution f(x). It is shown that for the class of increasing linear log likelihood SLFs, the proposed validity metric is equivalent to the Kullback-Leibler distance between f(x) and the unknown sensor-based distribution g(x), where the SLF ψ(x) is an observable increasing function of the unobservable g(x). The proposed technique is illustrated through several simulated and experimental examples.
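The Kullback-Leibler divergence that the validity metric reduces to is straightforward to compute in the discrete case. The distributions below are made-up stand-ins for the true distribution f(x) and a sensor-implied distribution g(x); the paper's continuous SLF machinery is not reproduced.

```python
import math

def kl(p, q):
    # Kullback-Leibler divergence D(p || q) for discrete distributions on
    # the same support (natural log; assumes q > 0 wherever p > 0).
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A sensor whose implied distribution matches the truth scores zero;
# the divergence grows as the sensor's localization drifts.
truth = [0.1, 0.6, 0.3]
good  = [0.1, 0.6, 0.3]
drift = [0.4, 0.2, 0.4]
print(kl(truth, good), kl(truth, drift))
```

A small divergence thus indicates a sensor whose localization agrees with the assumed ground-truth distribution, which is exactly the sense in which it validates the sensor.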
  • FINs: lattice theoretic tools for improving prediction of sugar production from populations of measurements

    Publication Year: 2004 , Page(s): 1017 - 1030
    Cited by:  Papers (14)

    This paper presents novel mathematical tools developed during a study of an industrial-yield prediction problem. The set F of fuzzy interval numbers, or FINs for short, is studied in the framework of lattice theory. A FIN is defined as a mapping to a metric lattice of generalized intervals; moreover, it is shown analytically that the set F of FINs is a metric lattice. A FIN can be interpreted as a convex fuzzy set, and a statistical interpretation is also proposed here. Algorithm CALFIN is presented for constructing a FIN from a population of samples. An underlying positive valuation function implies both a metric distance and an inclusion measure function in the set F of FINs. Substantial advantages, both theoretical and practical, are shown. Several examples illustrate geometrically on the plane both the utility and the effectiveness of the novel tools. We outline comparatively how some of the proposed tools have been employed for improving prediction of sugar production from populations of measurements for the Hellenic Sugar Industry, Greece.
  • Design of accurate classifiers with a compact fuzzy-rule base using an evolutionary scatter partition of feature space

    Publication Year: 2004 , Page(s): 1031 - 1044
    Cited by:  Papers (24)

    An evolutionary approach to designing accurate classifiers with a compact fuzzy-rule base using a scatter partition of feature space is proposed, in which all the elements of the fuzzy classifier design problem are encoded as parameters of a complex optimization problem. An intelligent genetic algorithm (IGA) is used to effectively solve the design problem of fuzzy classifiers with many tuning parameters. The merits of the proposed method are threefold: 1) the proposed method has high search ability to efficiently find fuzzy rule-based systems with high fitness values; 2) obtained fuzzy rules have high interpretability; and 3) obtained compact classifiers have high classification accuracy on unseen test patterns. The sensitivity of control parameters of the proposed method is empirically analyzed to show the robustness of the IGA-based method. The performance comparison and statistical analysis of experimental results using ten-fold cross validation show that the IGA-based method without heuristics is efficient in designing accurate and compact fuzzy classifiers using 11 well-known data sets with numerical attribute values.
  • Fuzzy branching temporal logic

    Publication Year: 2004 , Page(s): 1045 - 1055
    Cited by:  Papers (7)
    Save to Project icon | Request Permissions | Click to expandQuick Abstract | PDF file iconPDF (400 KB) |  | HTML iconHTML  

    Intelligent systems require a systematic way to represent and handle temporal information containing uncertainty. In particular, a logical framework is needed that can represent uncertain temporal information and its relationships with logical formulae. Fuzzy linear temporal logic (FLTL), a generalization of propositional linear temporal logic (PLTL) with fuzzy temporal events and fuzzy temporal states defined on a linear time model, was previously proposed for this purpose. However, many systems are best represented by branching time models in which each state can have more than one possible future path. In this paper, fuzzy branching temporal logic (FBTL) is proposed to address this problem. FBTL adopts and generalizes concurrent tree logic (CTL*), which is a classical branching temporal logic. The temporal model of FBTL is capable of representing fuzzy temporal events and fuzzy temporal states, and the order relation among them is represented as a directed graph. The utility of FBTL is demonstrated using a fuzzy job shop scheduling problem as an example. View full abstract»

  • Joint segmentation and classification of time series using class-specific features

    Publication Year: 2004 , Page(s): 1056 - 1067
    Cited by:  Papers (4)

    We present an approach for the joint segmentation and classification of a time series. The segmentation is based on a menu of candidate statistical models: each must be describable in terms of a sufficient statistic, but these sufficient statistics need not be the same, and they can be as complex (for example, cepstral features or autoregressive coefficients) as desired. All that is needed is the probability density function (PDF) of each sufficient statistic under its own assumed model, presumably obtained from training data; it is particularly appealing that no joint statistical characterization of all the statistics is required. There is similarly no need for an a priori specification of the number of segments, as the approach applies an appropriate penalty to over-zealous segmentation. The scheme has two stages. In the first stage, rough segmentations are computed sequentially using a piecewise generalized likelihood ratio (GLR); in the second stage, the results from the first stage (both forward and backward) are refined. The computational burden is remarkably small, approximately linear in the length of the time series, and the method is accurate both in the number of segments discovered and in segmentation accuracy. A hybrid of the approach with one based on Gibbs sampling is also presented; this combination is somewhat slower but considerably more accurate.
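    The penalized GLR idea can be sketched for the simplest possible model menu, a Gaussian with piecewise-constant mean and unit variance, where the segment mean is the sufficient statistic. This single-changepoint scan is a hypothetical illustration, not the paper's multi-stage forward/backward algorithm.

```python
def nll(seg):
    # Gaussian negative log-likelihood (up to a constant) with fitted mean
    # and unit variance; the sample mean is the segment's sufficient statistic
    m = sum(seg) / len(seg)
    return 0.5 * sum((x - m) ** 2 for x in seg)

def best_split(series, penalty=2.0):
    # GLR-style scan: compare the one-segment fit against every two-segment
    # split; the penalty discourages over-zealous segmentation
    whole = nll(series)
    best_k, best_gain = None, 0.0
    for k in range(2, len(series) - 1):
        gain = whole - (nll(series[:k]) + nll(series[k:])) - penalty
        if gain > best_gain:
            best_k, best_gain = k, gain
    return best_k  # None means "do not split"

series = [0.0] * 30 + [3.0] * 30  # mean shifts from 0 to 3 at index 30
print(best_split(series))  # → 30
```

    Applying such a scan recursively to each discovered segment gives a rough sequential segmentation of the kind the first stage produces; the paper's refinement stage and Gibbs-sampling hybrid then improve on it.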

  • Design of a switching controller for nonlinear systems with unknown parameters based on a fuzzy logic approach

    Publication Year: 2004 , Page(s): 1068 - 1074
    Cited by:  Papers (15)

    This paper deals with nonlinear plants subject to unknown parameters. A fuzzy model is first used to represent the plant. An equivalent switching plant model is then derived, which supports the design of a switching controller. It is shown that the closed-loop system formed by the plant and the switching controller is a linear system, so the performance of the closed-loop system can be designed. An application example, controlling a two-inverted-pendulum system on a cart, illustrates the design procedure of the proposed switching controller.
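    A hedged scalar sketch of the switching idea (the plant, memberships, and gains below are invented, not the paper's design): a Takagi-Sugeno-style plant blends two local linear models, and the controller switches to the gain designed for the currently dominant fuzzy rule.

```python
# Scalar fuzzy plant with two local models; rule 0 (A[0] = 1.4) is unstable.
A = [1.4, 0.6]  # local dynamics: x+ = A[i]*x + u under rule i

def weights(x):
    # membership of rule 0 grows with |x| (an illustrative choice)
    w0 = min(1.0, abs(x))
    return [w0, 1.0 - w0]

def plant(x, u):
    # fuzzy blend of the local linear models
    w = weights(x)
    return sum(wi * (a * x + u) for wi, a in zip(w, A))

def control(x):
    # switch to the gain that places the dominant local model's pole at 0.2
    i = 0 if weights(x)[0] >= weights(x)[1] else 1
    return -(A[i] - 0.2) * x

x = 2.0
for _ in range(30):
    x = plant(x, control(x))
print(abs(x) < 1e-3)  # → True
```

    In this toy the blended closed-loop coefficient stays strictly inside the unit circle for every membership value, so the state contracts at each step; the paper's contribution is showing that its switching controller renders the closed loop exactly linear, so performance can be assigned by design.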

  • Stabilization of nonlinear nonminimum phase systems: adaptive parallel approach using recurrent fuzzy neural network

    Publication Year: 2004 , Page(s): 1075 - 1088
    Cited by:  Papers (20)

    In this paper, an adaptive parallel control architecture is proposed to stabilize a class of nonlinear systems that are nonminimum phase. To obtain on-line performance and a self-tuning controller, the proposed control scheme contains a recurrent fuzzy neural network (RFNN) identifier, a nonfuzzy controller, and an RFNN compensator. The nonfuzzy controller, designed for the nominal system using the techniques of backstepping and feedback linearization, is the main component for stabilization. The RFNN compensator adaptively compensates for the nonfuzzy controller, i.e., it acts as a fine tuner, and the RFNN identifier provides the system's sensitivity for tuning the controller parameters. Based on the Lyapunov approach, rigorous proofs are presented to show the closed-loop stability of the proposed control architecture. With the aid of the RFNN compensator, the parallel controller can indeed improve system performance, reject disturbances, and enlarge the domain of attraction. Furthermore, computer simulations of several examples illustrate the applicability and effectiveness of the proposed controller.


Aims & Scope

IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics focuses on cybernetics, including communication and control across humans, machines, and organizations at the structural or neural level.

 

This Transaction ceased production in 2012. The current retitled publication is IEEE Transactions on Cybernetics.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief
Dr. Eugene Santos, Jr.
Thayer School of Engineering
Dartmouth College