IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics

Issue 2 • April 1997

  • Model-free optimization of fuzzy rule-based systems using evolution strategies

    Publication Year: 1997 , Page(s): 270 - 277
    Cited by:  Papers (8)

    In this paper the applicability of evolution strategies, a special kind of evolutionary algorithm, to the problem of parameter optimization in the development of fuzzy rule-based systems is demonstrated. To this end we introduce a shell which supports the design of any kind of rule-based system employing fuzzy logic for the formalization of imprecise reasoning processes and which optimizes all numerical parameters. The method works model-free: we do not need to know implicit features of the system being optimized. (A short evolution-strategy sketch follows this entry.)

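    As a rough illustration of the kind of optimizer the paper relies on, the following is a minimal (1+1) evolution strategy with a simple success-based step-size adaptation, applied to a toy loss over placeholder fuzzy membership parameters. The shell, rule base, and objective used by the authors are not reproduced; all names and values here are illustrative assumptions.

    ```python
    import numpy as np

    def es_optimize(loss, x0, sigma=0.1, iters=500, seed=0):
        """Minimal (1+1) evolution strategy with a rough one-fifth-success-rule step-size rule."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, float)
        fx = loss(x)
        for _ in range(iters):
            child = x + sigma * rng.standard_normal(x.size)   # Gaussian mutation
            fc = loss(child)
            if fc <= fx:                                       # keep the better individual
                x, fx = child, fc
                sigma *= 1.5                                   # widen search on success
            else:
                sigma *= 0.9                                   # narrow search on failure
        return x, fx

    # Placeholder objective: squared error of a 1-input fuzzy system's numeric
    # parameters (membership-function centers and widths) on toy data.
    def loss(params):
        centers, widths = params[:3], np.abs(params[3:]) + 1e-6
        xs = np.linspace(0, 1, 21)
        mu = np.exp(-((xs[:, None] - centers) ** 2) / widths ** 2)   # Gaussian memberships
        y = (mu * np.array([0.0, 0.5, 1.0])).sum(1) / mu.sum(1)      # weighted-average defuzzification
        return float(np.mean((y - xs) ** 2))                         # fit the identity map as a toy target

    best, err = es_optimize(loss, x0=[0.2, 0.5, 0.8, 0.2, 0.2, 0.2])
    ```
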
  • The hybrid grey-based models for temperature prediction

    Publication Year: 1997 , Page(s): 284 - 292
    Cited by:  Papers (40)

    In this paper several grey-based models are applied to temperature prediction problems. Standard normal distribution, linear regression, and fuzzy techniques are respectively integrated into the grey model to enhance the prediction capability of the embedded GM(1, 1), a single-variable first-order grey model. The original data are preprocessed by the statistical method of standard normal distribution so that they become normally distributed with a mean of zero and a standard deviation of one. The normalized data are then used to construct the grey model. Due to the inherent error between the predicted and actual outputs, the grey model is further supplemented by the linear regression method, the fuzzy method, or both to improve the prediction accuracy. Results from predicting the monthly temperatures for two different cities demonstrate that each proposed hybrid methodology can somewhat reduce the prediction errors. When both the statistical and fuzzy methods are incorporated with the grey model, the prediction capability of the hybrid model is quite satisfactory. We repeat the prediction problems using neural networks, and those results are also presented for comparison. (A plain GM(1, 1) sketch follows this entry.)

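    The hybrid corrections are specific to the paper, but the embedded GM(1, 1) model itself is standard. Below is a plain GM(1, 1) sketch with z-score preprocessing on toy temperature data; the data, the choice to fit the normalized series directly, and the function names are illustrative assumptions, not the authors' pipeline.

    ```python
    import numpy as np

    def gm11_forecast(x0, steps=1):
        """Plain GM(1,1): fit a, b from the accumulated series and extrapolate."""
        x0 = np.asarray(x0, float)
        x1 = np.cumsum(x0)                                   # accumulated generating operation
        z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
        B = np.column_stack([-z1, np.ones(len(z1))])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # least-squares estimate of a, b
        k = np.arange(len(x0) + steps)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # response of the whitened equation
        x0_hat = np.diff(x1_hat, prepend=x1_hat[0])          # inverse accumulation
        x0_hat[0] = x0[0]
        return x0_hat[len(x0):]                              # forecast values only

    temps = np.array([21.3, 22.1, 24.0, 26.5, 28.2, 29.0])   # toy monthly temperatures
    mu, sd = temps.mean(), temps.std()
    z = (temps - mu) / sd                                    # standard-normal (z-score) preprocessing
    pred = gm11_forecast(z, steps=2) * sd + mu               # de-normalize the grey-model forecast
    ```
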
  • Computational capabilities of recurrent NARX neural networks

    Publication Year: 1997 , Page(s): 208 - 215
    Cited by:  Papers (54)

    Recently, fully connected recurrent neural networks have been proven to be computationally rich: at least as powerful as Turing machines. This work focuses on another network which is popular in control applications and has been found to be very effective at learning a variety of problems. These networks are based upon Nonlinear AutoRegressive models with eXogenous Inputs (NARX models), and are therefore called NARX networks. As opposed to other recurrent networks, NARX networks have limited feedback which comes only from the output neuron rather than from hidden states. They are formalized by y(t) = Ψ(u(t-n_u), ..., u(t-1), u(t), y(t-n_y), ..., y(t-1)), where u(t) and y(t) represent the input and output of the network at time t, n_u and n_y are the input and output orders, and the function Ψ is the mapping performed by a multilayer perceptron. We constructively prove that NARX networks with a finite number of parameters are computationally as strong as fully connected recurrent networks and thus Turing machines. We conclude that, in theory, one can use NARX models rather than conventional recurrent networks without any computational loss, even though their feedback is limited. Furthermore, these results raise the issue of what amount of feedback or recurrence is necessary for any network to be Turing equivalent and what restrictions on feedback limit computational power. (A minimal NARX forward-pass sketch follows this entry.)

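    A minimal sketch of the NARX structure described above: tapped delay lines on the input and the fed-back output drive a small multilayer perceptron. The weights are random and untrained, and the orders and layer sizes are arbitrary choices, not values from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_u, n_y, hidden = 3, 3, 8                               # input/output orders and hidden width
    W1 = rng.standard_normal((hidden, n_u + 1 + n_y)) * 0.3
    b1 = np.zeros(hidden)
    W2 = rng.standard_normal(hidden) * 0.3
    b2 = 0.0

    def narx_step(u_hist, y_hist, u_now):
        """One NARX update: y(t) = MLP(u(t-n_u..t), y(t-n_y..t-1))."""
        x = np.concatenate([u_hist, [u_now], y_hist])        # tapped delay lines feed the MLP
        h = np.tanh(W1 @ x + b1)
        return float(W2 @ h + b2)

    # Run the (untrained) network on a sine input; feedback comes only from the output.
    u = np.sin(np.linspace(0, 4 * np.pi, 50))
    u_hist, y_hist, ys = np.zeros(n_u), np.zeros(n_y), []
    for ut in u:
        yt = narx_step(u_hist, y_hist, ut)
        ys.append(yt)
        u_hist = np.roll(u_hist, -1); u_hist[-1] = ut        # shift the delay lines
        y_hist = np.roll(y_hist, -1); y_hist[-1] = yt
    ```
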
  • A novel approach to feature selection based on analysis of class regions

    Publication Year: 1997 , Page(s): 196 - 207
    Cited by:  Papers (22)

    This paper presents a novel approach to feature selection based on analysis of the class regions generated by a fuzzy classifier. A measure for feature evaluation is proposed and is defined as the exception ratio. The exception ratio represents the degree of overlap in the class regions, in other words, the degree to which the fuzzy rules generated by the fuzzy classifier contain exceptions. It is shown that, for a given set of features, a subset that has the lowest sum of exception ratios tends to contain the most relevant features, compared to the other subsets with the same number of features. An algorithm is then proposed that eliminates irrelevant features: given the set of remaining features, it next eliminates the feature whose removal minimizes the sum of the exception ratios. A terminating criterion is also given, under which the algorithm stops when the next elimination would cause a significant increase in the sum of the exception ratios. Experiments show that the proposed algorithm performs well in eliminating irrelevant features while constraining the increase in recognition error rates for unknown data of the classifiers in use. (A generic backward-elimination sketch follows this entry.)

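    The paper's exception ratio is not reproduced here, but the surrounding greedy backward-elimination loop with a stopping threshold can be sketched generically; `score` stands in for the exception-ratio sum, and the toy surrogate and threshold are purely illustrative.

    ```python
    def backward_eliminate(features, score, max_increase=0.05):
        """Greedy backward elimination: repeatedly drop the feature whose removal keeps
        the score (e.g., a sum of exception ratios) lowest; stop when any removal
        would raise the score by more than a threshold."""
        remaining = list(features)
        current = score(remaining)
        while len(remaining) > 1:
            candidates = [(score([f for f in remaining if f != g]), g) for g in remaining]
            best_score, worst_feature = min(candidates)
            if best_score - current > max_increase:          # significant jump: terminate
                break
            remaining.remove(worst_feature)
            current = best_score
        return remaining

    # Toy surrogate score: penalizes subsets that drop the "relevant" features x1, x2.
    toy = lambda subset: 0.1 * sum(f not in subset for f in ("x1", "x2"))
    print(backward_eliminate(["x1", "x2", "noise1", "noise2"], toy))   # -> ['x1', 'x2']
    ```
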
  • Study and resolution of singularities for a 6-DOF PUMA manipulator

    Publication Year: 1997 , Page(s): 332 - 343
    Cited by:  Papers (11)

    In solving the inverse kinematics problem of robot manipulators, the inherent singularity problem should always be considered. When a manipulator approaches a singular configuration, a degree of freedom is lost, so that there is no feasible joint solution that moves the manipulator in this singular direction. In this paper, the singularities of a 6-DOF PUMA manipulator are analyzed in detail and all the corresponding singular directions in task space are clearly identified. To resolve the singularity problem, an approach termed the Singularity Isolation Plus Compact QP (SICQP) method is proposed. The SICQP method decomposes the work space into achievable and unachievable (i.e., singular) directions. The exactness requirement in the singular directions is then relaxed so that extra redundancy is provided to the achievable directions. Finally, the Compact QP method is applied to maintain exactness in the achievable directions and to minimize the tracking errors in the singular directions under the condition that feasible joint solutions must be obtained. In the end, some simulation results for the PUMA manipulator are given to demonstrate the effectiveness of the SICQP method. (A small SVD-based sketch of singular-direction identification follows this entry.)

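    This is not the SICQP method; it only illustrates, via the singular value decomposition of the manipulator Jacobian, how task space can be split into achievable and (near-)singular directions, which is the decomposition the abstract describes. The tolerance and toy Jacobian are assumptions.

    ```python
    import numpy as np

    def split_directions(J, tol=1e-6):
        """Separate task-space directions spanned by the Jacobian (achievable)
        from those lost at a singular configuration."""
        U, s, _ = np.linalg.svd(J)
        s = np.concatenate([s, np.zeros(U.shape[1] - s.size)])   # pad if J is not square
        mask = s > tol * s.max()
        return U[:, mask], U[:, ~mask]                           # achievable, singular directions

    # Toy 6x6 Jacobian with one vanishing singular value (a wrist-like singularity).
    J = np.diag([1.0, 1.0, 1.0, 1.0, 1.0, 0.0])
    achievable, singular = split_directions(J)
    ```
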
  • A comparative study on similarity-based fuzzy reasoning methods

    Publication Year: 1997 , Page(s): 216 - 227
    Cited by:  Papers (28)

    If the given fact for an antecedent in a fuzzy production rule (FPR) does not match the antecedent of the rule exactly, the consequent can still be drawn by techniques such as fuzzy reasoning. Many existing fuzzy reasoning methods are based on Zadeh's Compositional Rule of Inference (CRI), which requires setting up a fuzzy relation between the antecedent and the consequent part. There are other fuzzy reasoning methods which do not use Zadeh's CRI. Among them, the similarity-based fuzzy reasoning methods, which make use of the degree of similarity between a given fact and the antecedent of the rule to draw the conclusion, are well known. In this paper, six similarity-based fuzzy reasoning methods are compared and analyzed, two of which are newly proposed by the authors. The comparisons are two-fold. One is to compare the six reasoning methods in drawing appropriate conclusions for a given set of FPRs. The other is to compare them on five issues: 1) the types of FPR handled by these methods; 2) the complexity of the methods; 3) the accuracy of the conclusion drawn; 4) the accuracy of the similarity measure; and 5) the multi-level reasoning capability. The results shed some light on how to select an appropriate fuzzy reasoning method under different environments. (A generic similarity-based inference sketch follows this entry.)

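    A generic sketch of one similarity-based inference step: compute a similarity degree between the observed fact and the rule antecedent, then modify the consequent by that degree. The Jaccard-style measure and the scaling modification below are common choices, not the six specific methods compared in the paper.

    ```python
    import numpy as np

    def similarity(fact, antecedent):
        """A simple set-theoretic similarity: |A ∩ B| / |A ∪ B| on discretized memberships."""
        inter = np.minimum(fact, antecedent).sum()
        union = np.maximum(fact, antecedent).sum()
        return inter / union if union > 0 else 0.0

    def infer(fact, antecedent, consequent):
        """Scale the rule consequent by the similarity degree (one common modification scheme)."""
        return similarity(fact, antecedent) * consequent

    x = np.linspace(0, 10, 101)
    tall       = np.clip((x - 5) / 3, 0, 1)            # antecedent "height is tall"
    observed   = np.clip((x - 5.5) / 3, 0, 1)          # observed fact, slightly shifted
    heavy      = np.clip((x - 6) / 3, 0, 1)            # consequent "weight is heavy"
    conclusion = infer(observed, tall, heavy)          # consequent weakened by the mismatch
    ```
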
  • A Petri net synthesis theory for modeling flexible manufacturing systems

    Publication Year: 1997 , Page(s): 169 - 183
    Cited by:  Papers (29)

    A theory that synthesizes Petri nets for modeling flexible manufacturing systems is presented. The theory adopts a bottom-up, modular-composition approach to construct net models. Each module is modeled as a resource control net (RCN), which represents a subsystem that controls a resource type in a flexible manufacturing system. Interactions among the modules are described by common transitions and transition subnets. The net obtained by merging the modules under two minimal restrictions is shown to be conservative and thus bounded. An algorithm is developed to detect two sufficient conditions for structural liveness of the net. The algorithm examines only the net's structure and the initial marking, and appears to be more efficient than state enumeration techniques such as the reachability tree method. The sufficient conditions for liveness are shown to be related to structural objects called siphons. To demonstrate the applicability of the theory, a flexible manufacturing system of moderate size is modeled and analyzed using the proposed theory.

  • A complementary fuzzy logic system

    Publication Year: 1997 , Page(s): 293 - 295
    Cited by:  Papers (3)

    This article discusses a complementary (C) fuzzy logic system, a continuous multiary logic system that, unlike usual fuzzy logic systems, satisfies a complementary law. The article formulates the C fuzzy logic system, derives tautologies, and presents an example that illustrates the difference in inference computation between the C fuzzy logic system and a usual fuzzy logic system.

  • An efficient method for obtaining the general solution for the force balance equations with hard point contacts

    Publication Year: 1997 , Page(s): 255 - 260
    Cited by:  Papers (3)

    A compact formulation which is more efficient than the traditional pseudoinverse formulation for obtaining the general solution of the force balance equations has been presented previously (Cheng and Orin, 1991). With hard point contacts considered, the force balance equations can be decomposed into two smaller sets of rank-3 linear equations if proper coordinate frames for the reference member and at the contact points are chosen. This decomposition, together with the compact formulation, can reduce the steps of the Gaussian elimination process, increase the parallelism of the algorithm, and therefore keep the computation time for obtaining the general solution of the force balance equations to a minimum. (A sketch of the pseudoinverse-based general solution follows this entry.)

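    For reference, the traditional pseudoinverse formulation mentioned in the abstract gives the general solution of the force balance equations as a particular solution plus a null-space term; the paper's compact rank-3 decomposition is not reproduced. The dimensions in the toy example are illustrative.

    ```python
    import numpy as np

    def general_solution(A, b, z):
        """General solution of the (underdetermined) force balance equations A f = b:
        f = A⁺ b + (I - A⁺ A) z, where z parameterizes the internal (null-space) forces."""
        A_pinv = np.linalg.pinv(A)
        particular = A_pinv @ b                        # minimum-norm solution
        null_proj = np.eye(A.shape[1]) - A_pinv @ A    # projector onto the null space of A
        return particular + null_proj @ z              # add any internal-force component

    # Toy example: 6 balance equations, 9 contact-force unknowns (three point contacts).
    rng = np.random.default_rng(1)
    A, b, z = rng.standard_normal((6, 9)), rng.standard_normal(6), rng.standard_normal(9)
    f = general_solution(A, b, z)
    assert np.allclose(A @ f, b)
    ```
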
  • Verification of computer users using keystroke dynamics

    Publication Year: 1997 , Page(s): 261 - 269
    Cited by:  Papers (55)  |  Patents (7)

    This paper presents techniques for verifying the identity of computer users from the keystroke dynamics of their login strings, using pattern recognition and neural network techniques. This work is a continuation of our previous work, in which only interkey times were used as features for identifying computer users. In this work we use key hold times for classification and compare the performance with the former interkey time-based technique. We then use the combined interkey and hold times for the identification process. We applied several neural network and pattern recognition algorithms to verifying computer users as they type their password phrases. It was found that hold times are more effective than interkey times, and the best identification performance was achieved by using both time measurements. An identification accuracy of 100% was achieved when the combined hold and interkey times were used as features with the fuzzy ARTMAP, radial basis function network (RBFN), and learning vector quantization (LVQ) neural network paradigms. Other neural network and classical pattern recognition algorithms, such as backpropagation with a sigmoid transfer function (BP, Sigm), hybrid sum-of-products (HSOP), sum-of-products (SOP), the potential function, and Bayes' rule, gave moderate performance. (A small feature-extraction sketch follows this entry.)

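    A small sketch of the feature extraction implied by the abstract: hold times and interkey latencies computed from key press/release timestamps. The event format and the toy recording are assumptions; the classifiers themselves are not shown.

    ```python
    def keystroke_features(events):
        """Extract hold times and interkey (press-to-press) latencies from a list of
        (key, press_time_ms, release_time_ms) tuples recorded while typing a login string."""
        hold = [rel - press for _, press, rel in events]                 # key hold durations
        interkey = [events[i + 1][1] - events[i][1]                      # latency between key presses
                    for i in range(len(events) - 1)]
        return hold + interkey                                           # combined feature vector

    # Toy recording of the string "abc" (times in milliseconds).
    events = [("a", 0, 95), ("b", 180, 260), ("c", 340, 430)]
    print(keystroke_features(events))   # [95, 80, 90, 180, 160]
    ```
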
  • Behavioral Petri nets: a model for diagnostic knowledge representation and reasoning

    Publication Year: 1997 , Page(s): 184 - 195
    Cited by:  Papers (11)

    Some of the most popular approaches to model-based diagnosis consist of reasoning about a model of the behaviour of the system to be diagnosed, by considering a set of observations about the system and explaining them in terms of a set of initial causes. This process has been widely modeled via logical formalisms that essentially take into account declarative aspects. In this paper, a new approach is proposed in which the diagnostic process is captured within a framework based on the formalism of Petri nets. We introduce a particular net model, called the Behavioral Petri Net (BPN), and show how the diagnostic process can be formalized in terms of reachability in a BPN and implemented by exploiting classical Petri net analysis techniques such as reachability graph analysis and P-invariant computation. Advantages of the proposed method, such as suitability to parallel processing and exploitation of linear algebra techniques, are then pointed out. (A minimal P-invariant computation follows this entry.)

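    The abstract mentions P-invariant computation; as a reminder, P-invariants are vectors x satisfying xᵀC = 0 for the place-by-transition incidence matrix C, i.e., elements of the null space of Cᵀ. A minimal computation on a toy three-place cycle (not a BPN) is sketched below.

    ```python
    from sympy import Matrix

    # Incidence matrix C (places x transitions) of a small net:
    # p1 --t1--> p2 --t2--> p3, with t3 returning p3's token to p1.
    C = Matrix([[-1,  0,  1],
                [ 1, -1,  0],
                [ 0,  1, -1]])

    # P-invariants are nonzero x with x^T C = 0, i.e. vectors in the null space of C^T.
    invariants = C.T.nullspace()
    print(invariants)   # a single invariant (1, 1, 1): the token count p1 + p2 + p3 is conserved
    ```
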
  • Predictive head movement tracking using a Kalman filter

    Publication Year: 1997 , Page(s): 326 - 331
    Cited by:  Papers (15)  |  Patents (3)

    The use of head movements in control applications leaves the hands free for other tasks and exploits the mobility of the head to acquire and track targets over a wide field of view. We present the results of applying a Kalman filter to generate prediction estimates for tracking head positions. A simple kinematic approach based on the assumption of a piecewise constant acceleration process is suggested and is shown to track head positions with an rms error under 2° for head movements with accelerations smaller than 3000°/s². To account for the wide range of head dynamic characteristics, an adaptive approach with input estimation is developed. The performance of the Kalman filter is compared to that of a simple polynomial predictor. (A constant-acceleration Kalman predictor sketch follows this entry.)

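    A minimal sketch of a piecewise-constant-acceleration Kalman predictor of the kind the abstract describes: the state is angle, rate, and acceleration, only the angle is measured, and the prediction step supplies the look-ahead estimate. The sample rate and noise covariances are tuning assumptions, not values from the paper.

    ```python
    import numpy as np

    dt = 0.01                                         # assumed 100 Hz head-position samples
    # Constant-acceleration kinematic model: state = [angle, rate, accel]
    F = np.array([[1, dt, 0.5 * dt ** 2],
                  [0, 1,  dt],
                  [0, 0,  1.0]])
    H = np.array([[1.0, 0.0, 0.0]])                   # only the angle is measured
    Q = 1e-2 * np.eye(3)                              # process noise (tuning assumption)
    R = np.array([[0.25]])                            # measurement noise in deg^2 (assumed)

    def kalman_predict_update(x, P, z):
        """One predict/update cycle; the predicted state provides the look-ahead estimate."""
        x_pred, P_pred = F @ x, F @ P @ F.T + Q       # prediction used to lead the head motion
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(3) - K @ H) @ P_pred
        return x_pred, x_new, P_new

    x, P = np.zeros(3), np.eye(3)
    for z in np.sin(np.linspace(0, 1, 100)) * 30:     # synthetic head-angle trace (degrees)
        prediction, x, P = kalman_predict_update(x, P, np.array([z]))
    ```
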
  • Modeling of flexible-link manipulators with prismatic joints

    Publication Year: 1997 , Page(s): 296 - 305
    Cited by:  Papers (10)

    The axially translating flexible link in flexible manipulators with a prismatic joint can be modeled using the Euler-Bernoulli beam equation together with the convective terms. In general, the method of separation of variables cannot be applied to solve this partial differential equation. In this paper, we present a nondimensional form of the Euler-Bernoulli beam equation using the concept of group velocity and present conditions under which separation of variables and the assumed modes method can be used. The use of clamped-mass boundary conditions leads to a time-dependent frequency equation for the translating flexible beam. We present a novel method to solve this time-dependent frequency equation by using a differential form of the frequency equation. We then present a systematic modeling procedure for spatial multi-link flexible manipulators having both revolute and prismatic joints. The assumed modes/Lagrangian formulation of dynamics is employed to derive closed-form equations of motion. We show, using a model-based control law, that the closed-loop dynamic response of the modal variables becomes unstable during retraction of a flexible link, in contrast to the stable dynamic response during extension of the link. Numerical simulation results are presented for a flexible spatial RRP-configuration robot arm. We show that the numerical results compare favorably with those obtained using a finite-element-based model. (A commonly quoted form of the governing beam equation follows this entry.)

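    For orientation, a commonly quoted form of the Euler-Bernoulli equation for a link translating axially at speed v(t), including the convective terms the abstract refers to, is given below. The authors' nondimensional form and boundary conditions are not reproduced, so treat this as a generic reference rather than the paper's model.

    ```latex
    % Transverse deflection w(x,t) of a link translating axially at speed v(t);
    % EI is the flexural rigidity and \rho A the mass per unit length. The terms
    % 2v w_{xt}, v^2 w_{xx}, and \dot{v} w_x are the convective terms.
    EI\,\frac{\partial^4 w}{\partial x^4}
      + \rho A\left(\frac{\partial^2 w}{\partial t^2}
      + 2v\,\frac{\partial^2 w}{\partial x\,\partial t}
      + v^2\,\frac{\partial^2 w}{\partial x^2}
      + \dot{v}\,\frac{\partial w}{\partial x}\right) = 0
    ```
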
  • Exploration of polygonal environments using range data

    Publication Year: 1997 , Page(s): 250 - 255

    Several robotic problems involve the systematic traversal of the environment, commonly referred to as exploration. We present a strategy for the exploration of unknown finite polygonal environments using a point robot with 1) no positional uncertainty and 2) an ideal range sensor that measures range in N uniformly distributed directions. The range data vector obtained from the range sensor corresponds to a sampled version of a visibility polygon. Visibility polygon edges that do not correspond to environmental edges are called jump edges, and the exploration strategy is based on the fact that jump edges indicate directions of possibly unexplored environmental regions. We describe conditions under which it is possible to identify jump edges in the range data. We also show how the exploration strategy can be used in a solution to the terrain acquisition problem and describe conditions under which a solution is guaranteed within a finite number of measurements. (A simple jump-edge detection sketch follows this entry.)

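    A simple sketch of jump-edge detection in a sampled range scan: flag directions where consecutive range readings differ by more than a threshold. The threshold and the toy scan are illustrative; the paper's identifiability conditions are not reproduced.

    ```python
    import numpy as np

    def jump_edges(ranges, threshold=0.5):
        """Indices where consecutive range samples differ by more than a threshold;
        such discontinuities ("jump edges") point toward possibly unexplored regions."""
        r = np.asarray(ranges, float)
        diffs = np.abs(np.diff(np.append(r, r[0])))     # wrap around the 360° scan
        return np.where(diffs > threshold)[0]

    # Toy scan: a wall at ~2 m with an opening (larger ranges) between samples 10 and 19.
    scan = np.full(36, 2.0)
    scan[10:20] = 6.0
    print(jump_edges(scan))   # [ 9 19]
    ```
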
  • A learning process of the matching identification problem

    Publication Year: 1997 , Page(s): 228 - 238

    Recently, Mehrez and Steinberg (1995) described and studied the matching identification problem (MIP). The MIP is a form of knowledge acquisition problem from the field of artificial intelligence. For instance, an expert system infers knowledge from a set of examples; but how does one most quickly acquire the examples from which that knowledge is inferred? The MIP is a special case of this problem. Although an optimal algorithm was not found by Mehrez and Steinberg, they described two general types of heuristics. In this paper we describe an optimal algorithm for the case of K=2, and an improved heuristic for general K, which identifies a chosen subset with 6% fewer inquiries on average when N=15, K=3. The heuristic improves relative to the Type I heuristic as N increases, with K held constant. The improved heuristic is concerned with the symbols not yet classified as being in or out of the chosen subset. By inquiring about subsets consisting of unclassified symbols, we most quickly "span" the set of unclassified symbols. Closed-form equations are developed for the expected number of inquiries required and the variance of the number of inquiries required for the optimal algorithm. Computational studies are provided for Mehrez and Steinberg's Type I heuristics, the K=2 optimal algorithm, and the spanning heuristic.

  • Time-optimal trajectories for cooperative multi-manipulator systems

    Publication Year: 1997 , Page(s): 343 - 353
    Cited by:  Papers (10)

    We present two schemes for planning time-optimal trajectories for a cooperative multi-manipulator system (CMMS) carrying a common object. We assume that the desired path is given and parameterizable by an arc-length variable. Both approaches take into account the dynamics of the manipulators and the object. The first approach employs linear programming techniques and allows us to obtain the time-optimal execution of the given task utilizing the maximum torque capacities of the joint motors. The second approach is a sub-time-optimal method that is computationally very efficient: the given load is divided into a share for each robot in the CMMS in a manner that maximizes the trajectory acceleration/deceleration and hence minimizes the trajectory execution time. This load distribution approach uses optimization schemes that degenerate to a linear search algorithm for the case of two robots manipulating a common load, resulting in a significant reduction of computation time. The load distribution scheme not only reduces the computation time but also opens the possibility of applying the method to real-time planning and control of a CMMS. Further, we show that for certain object trajectories the load distribution scheme yields truly time-optimal trajectories.

  • Comparative study of stochastic algorithms for system optimization based on gradient approximations

    Publication Year: 1997 , Page(s): 244 - 249
    Cited by:  Papers (53)  |  Patents (2)

    Stochastic approximation (SA) algorithms can be used in system optimization problems for which only noisy measurements of the system are available and the gradient of the loss function is not. This type of problem arises in adaptive control, neural network training, experimental design, stochastic optimization, and many other areas. This paper studies three types of SA algorithms in a multivariate Kiefer-Wolfowitz setting, which uses only noisy measurements of the loss function (i.e., no loss-function gradient measurements). The algorithms considered are the standard finite-difference SA (FDSA) and two accelerated algorithms, the random-directions SA (RDSA) and the simultaneous-perturbation SA (SPSA). RDSA and SPSA use randomized gradient approximations based on (generally) far fewer function measurements than FDSA in each iteration. This paper describes the asymptotic error distribution for a class of RDSA algorithms, and compares the RDSA, SPSA, and FDSA algorithms theoretically (using mean-square errors computed from asymptotic distributions) and numerically. Based on the theoretical and numerical results, SPSA is the preferable algorithm to use. (A brief SPSA/FDSA gradient-estimate sketch follows this entry.)

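    The contrast between SPSA and FDSA is easy to sketch: SPSA estimates the whole gradient from two noisy loss measurements using a simultaneous random perturbation, while FDSA perturbs one coordinate at a time. The gain sequences and the noisy quadratic loss below are generic illustrative choices, not the paper's experimental setup.

    ```python
    import numpy as np

    def spsa_gradient(loss, theta, c, rng):
        """Simultaneous-perturbation gradient estimate: two loss measurements,
        regardless of the dimension of theta (vs. 2p measurements for FDSA)."""
        delta = rng.choice([-1.0, 1.0], size=theta.size)      # Bernoulli +-1 perturbation
        return (loss(theta + c * delta) - loss(theta - c * delta)) / (2 * c * delta)

    def fdsa_gradient(loss, theta, c):
        """Finite-difference estimate: perturbs one coordinate at a time (2p measurements)."""
        g = np.zeros_like(theta)
        for i in range(theta.size):
            e = np.zeros_like(theta); e[i] = c
            g[i] = (loss(theta + e) - loss(theta - e)) / (2 * c)
        return g

    # Noisy quadratic loss; only noisy measurements are available, no gradients.
    rng = np.random.default_rng(0)
    noisy_loss = lambda th: float(np.sum(th ** 2) + 0.01 * rng.standard_normal())
    theta = np.ones(10)
    for k in range(1, 201):                                    # basic SA recursion using SPSA
        a_k, c_k = 0.1 / k ** 0.602, 0.1 / k ** 0.101          # standard gain sequences
        theta = theta - a_k * spsa_gradient(noisy_loss, theta, c_k, rng)
    ```
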
  • Fuzzy system as parameter estimator of nonlinear dynamic functions

    Publication Year: 1997 , Page(s): 313 - 326
    Cited by:  Papers (1)

    In this paper we use adaptive fuzzy systems as intelligent identification systems for nonlinear time-varying plants. A new technique to design the fuzzy system, which relies on the minimization of a loss function, is presented. The design technique uses the centers of the fuzzy sets (labels) in the antecedent part of the rule base as the estimated parameters. This parametrization has the linear-in-the-parameters (LITP) characteristic, which allows standard parameter estimation techniques to be used to estimate the parameters of the fuzzy system. The combination of the fuzzy system and the estimation method then performs as a nonlinear estimator. If several fuzzy sets are defined for the input variables in the antecedent part, the fuzzy system ("fuzzy estimator") behaves as a collection of nonlinear estimators in which different rule regions have different parameters. The proposed scheme is potentially capable of estimating the parameters of highly nonlinear plants. Simulation examples, which use plants with highly nonlinear gain, show the power of the proposed estimation scheme in comparison to estimation using a linear model.

  • Fuzzy variable structure control

    Publication Year: 1997 , Page(s): 306 - 312
    Cited by:  Papers (37)

    A new methodology is presented to improve the design and tuning of a fuzzy logic controller (FLC) using variable structure control (VSC) theory. A VSC-type rule base is constructed and the fundamentals of the FLC are explored quantitatively through VSC theory. A very concise mathematical expression for the FLC is presented, to which the Lyapunov stability criterion can be applied to guide the design and tuning. This results in a simpler and more systematic procedure. Application of the method to higher-order systems is made straightforward by applying a hierarchical technique. The validity of the design methodology is demonstrated by simulation. (A generic sliding-surface sketch follows this entry.)

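    A generic sketch of the variable-structure quantities such a VSC-type rule base is built around: a sliding surface s = λe + ė and a control law driven by s, here with a saturated boundary layer in place of a fuzzy rule base. The gains and the double-integrator plant are illustrative assumptions, not the paper's FLC.

    ```python
    import numpy as np

    lam, k, phi = 2.0, 5.0, 0.1          # surface slope, switching gain, boundary-layer width

    def vsc_control(e, e_dot):
        """Variable-structure-style control: drive the state onto the surface s = lam*e + e_dot.
        A VSC-type fuzzy rule base approximates exactly this kind of s-dependent law."""
        s = lam * e + e_dot
        return -k * np.clip(s / phi, -1.0, 1.0)   # saturated (boundary-layer) switching term

    # Closed loop on a double integrator x_ddot = u, regulating x to zero.
    x, x_dot, dt = 1.0, 0.0, 0.01
    for _ in range(1000):
        u = vsc_control(x, x_dot)
        x_dot += u * dt
        x += x_dot * dt
    ```
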
  • A neural network model of memory under stress

    Publication Year: 1997 , Page(s): 278 - 284
    Cited by:  Papers (2)

    A model that attempts to simulate animal memory under stress is presented. For this purpose a model of selectable multiple associative memories is given. We consider two underlying types of memory, stressed and unstressed, implemented on the same neural network. In our model, learning into one or the other type of memory takes place according to the stress of the individual at the time of learning. Memory retrieval is governed by a continuous function of the stress of the individual at the time of retrieval: for low stress the individual retrieves unstressed associations, and for high stress, stressed associations. Several biological results supporting this model are presented. A mathematical proof on the behaviour of the basins of attraction of the network as a function of stress is presented, a generalization to selectable multiple coexisting memories is given, and engineering and other applications of the model are suggested.

  • String taxonomy using learning automata

    Publication Year: 1997 , Page(s): 354 - 365
    Cited by:  Papers (7)  |  Patents (10)

    A typical syntactic pattern recognition (PR) problem involves comparing a noisy string with every element of a dictionary, X. The problem of classification can be greatly simplified if the dictionary is partitioned into a set of subdictionaries. In this case, the classification can be hierarchical: the noisy string is first compared to a representative element of each subdictionary, and the closest match within that subdictionary is subsequently located. Indeed, the entire problem of subdividing a set of strings into subsets, where each subset contains "similar" strings, has been referred to as the "String Taxonomy Problem". To our knowledge there is no reported solution to this problem. In this paper we present a learning-automaton-based solution to string taxonomy. The solution utilizes the Object Migrating Automaton, whose power in clustering objects and images has been reported. The power of the scheme for string taxonomy is demonstrated using random strings and garbled versions of string representations of fragments of macromolecules.

  • Entropy-based reliability analysis for intelligent machines

    Publication Year: 1997 , Page(s): 239 - 244
    Cited by:  Papers (4)

    A new metric for performance assessment of intelligent machines has been developed. The method fuses concepts from the Theory of Intelligent Machines proposed by Saridis (1988) with traditional reliability analysis in the development of a measure which reflects both the uncertainty inherent in the intelligent machine and the uncertainty allowed by the task description. The metric is entropy based, and is shown to be analogous to a measure of system reliability.


Aims & Scope

IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics focuses on cybernetics, including communication and control across humans, machines, and organizations at the structural or neural level.

 

This Transaction ceased production in 2012. The current retitled publication is IEEE Transactions on Cybernetics.


Meet Our Editors

Editor-in-Chief
Dr. Eugene Santos, Jr.
Thayer School of Engineering
Dartmouth College