IEEE Transactions on Systems, Man and Cybernetics

Issue 2 • February 1994

18 articles in this issue
  • A computational structure for preattentive perceptual organization: graphical enumeration and voting methods

    Page(s): 246 - 267

    Presents an efficient computational structure for preattentive perceptual organization. By perceptual organization the authors refer to the ability of a vision system to organize features detected in images based on viewpoint consistency and other Gestaltic perceptual phenomena. This usually has two components, a primarily bottom-up preattentive part and a top-down attentive part, with meaningful features emerging in a synergistic fashion from the original set of (very) primitive features. The authors propose a hierarchical approach, using voting methods to build associations through consensus and relational graphs to represent the organization at each level. The voting method is very efficient in terms of time and space and performs impressively for a wide range of organizations. The graphical representation allows the ready extraction of higher-order features, or perceptual tokens, because the relational information is rendered explicit.
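    To illustrate the voting-through-consensus idea in the abstract above (this is a generic Hough-style sketch, not the paper's algorithm), each point can vote for every discretized line passing through it; a group of collinear points then emerges as the bucket with the most votes. All parameter values here are illustrative.

```python
# Illustrative voting sketch: points vote in a coarse (theta, rho) accumulator;
# the largest bucket is the consensus group of (approximately) collinear points.
import math
from collections import defaultdict

def vote_for_lines(points, n_theta=36, rho_step=1.0):
    """Each point casts one vote per discretized line direction."""
    acc = defaultdict(list)
    for (x, y) in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(t, round(rho / rho_step))].append((x, y))
    return acc

def best_group(points):
    """Return the largest consensus group found by voting."""
    acc = vote_for_lines(points)
    return max(acc.values(), key=len)

pts = [(i, 2 * i) for i in range(5)] + [(7, 1)]   # five collinear points + an outlier
group = best_group(pts)                            # the five points on y = 2x
```

    The consensus mechanism tolerates discretization noise: the five collinear points land in one accumulator bucket while the outlier votes elsewhere.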

  • The effect of bandwidth on telerobot system performance

    Page(s): 342 - 348

    The effect of the joint bandwidth of the slave arm on telerobot system performance was investigated experimentally. Three bandwidth values, 0.5, 1.0, and 2.0 Hz, were used to perform peg-in-hole insertion and removal tests. System performance was assessed by measuring the task completion time and the sum-of-squares of the contact forces and moments applied to the peg. The experimental results indicate a significant performance improvement when the bandwidth was increased from 0.5 to 1.0 Hz; no change in performance was observed, however, between the 1.0 and 2.0 Hz cases.

  • Thinning of gray-scale images with combined sequential and parallel conditions for pixel removal

    Page(s): 294 - 299

    A thinning algorithm is proposed for gray-scale images that applies a combination of sequential and parallel conditions for pixel removal. The authors found that the algorithm developed by Salari and Siy (1984) produces unsuitable results in certain cases, due to its purely sequential processing. The algorithm proposed in this paper is an improvement of theirs and, at the same time, is a gray-image equivalent of Hilditch's thinning algorithm. Two versions are available, deriving either 4-connected or 8-connected core lines.

  • Point pattern representation using imprecise, incomplete, nonmetric information

    Page(s): 222 - 233

    A novel method is described for representing two- or three-dimensional patterns of n points using imprecise, incomplete, nonmetric information. This information consists solely of a rank-ordered list of interpoint distances determined from pairwise comparisons. Ideally, each comparison should determine a longer and a shorter distance, and a set of comparisons should include all possible pairs; in practice, the representation information is likely to be imprecise and incomplete. Methods are presented for maximizing the information obtained from imprecise, incomplete sets of comparisons through inferencing procedures. The sufficiency of the resulting information for precise pattern representation is demonstrated through its use in reconstructing the patterns with multidimensional scaling (MDS). Some surprising results are presented on the possible advantages of imprecision from the viewpoint of data requirements. A short appendix links the inferencing procedures developed in this paper to the mathematical concept of a semi-order.
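    The paper applies *nonmetric* MDS to rank-ordered distances; as a point of reference only, the classical (metric) variant below shows the underlying idea of recovering a point configuration, up to rotation and translation, from its interpoint distances. This sketch is not the paper's method.

```python
# Classical (metric) MDS: recover coordinates from a Euclidean distance matrix
# via double-centering and an eigendecomposition.
import numpy as np

def classical_mds(D, dim=2):
    """Return a `dim`-dimensional configuration reproducing distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                 # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]          # keep the top `dim` components
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

pts = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0], [3.0, 4.0]])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
X = classical_mds(D)                          # recovered configuration
D_rec = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
```

    For exact Euclidean distances the recovered pairwise distances match the input; the nonmetric version used in the paper relaxes this to preserving only the rank order.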

  • A knowledge-based system using multiple expert modules for monitoring leprosy, an endemic disease

    Page(s): 173 - 186

    An environment with multiple expert modules is essential for proper handling of the diagnosis and monitoring of chronic endemic diseases. In this paper, we present LEPDIAG, a knowledge-based system for the diagnosis and monitoring of leprosy. The proposed architecture is a conglomeration of three expert modules and a procedural performance evaluator. A novel feature of the architecture is the inclusion of a homeostatic expert module that models the immunological reaction of the patient. The entire system provides a closed-loop diagnosis and follow-up environment. LEPDIAG is built around FRUIT, a fuzzy expert system building tool for dealing with imprecise knowledge. The domain knowledge in LEPDIAG is expressed by fuzzy production rules, which have been partitioned using suitable clustering criteria; rule conflicts are resolved using metarules. The information objects used and the fuzzy inference strategy adopted are illustrated.

  • Group technology and cellular manufacturing

    Page(s): 203 - 215

    A number of survey papers on group technology and cellular manufacturing system design have been published. Many of them focus primarily on clustering techniques that manipulate rows and columns of the part-machine processing indicator matrix to form a block diagonal structure. Since the last survey appeared, there have been further developments in cellular manufacturing system design, including a number of papers that consider practical design constraints. The purpose of this paper is to provide a thorough survey of papers on group technology and cellular manufacturing system design, and to highlight some important design factors that cannot be ignored.
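    The block-diagonalization step mentioned above can be sketched with King's rank order clustering (ROC), a standard technique in this literature (the survey itself covers many such methods; this particular choice and the toy matrix are illustrative): rows and columns are repeatedly re-sorted by the binary value of their incidence pattern until the order stabilizes.

```python
# Rank order clustering: sort rows, then columns, of a 0/1 part-machine
# incidence matrix by descending binary weight until the ordering is stable.
def rank_order_clustering(matrix, max_iter=20):
    rows = list(range(len(matrix)))
    cols = list(range(len(matrix[0])))
    for _ in range(max_iter):
        # Comparing the 0/1 lists lexicographically equals comparing binary numbers.
        new_rows = sorted(rows, key=lambda r: [matrix[r][c] for c in cols], reverse=True)
        new_cols = sorted(cols, key=lambda c: [matrix[r][c] for r in new_rows], reverse=True)
        if (new_rows, new_cols) == (rows, cols):
            break
        rows, cols = new_rows, new_cols
    return rows, cols

incidence = [                 # machines x parts
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
]
rows, cols = rank_order_clustering(incidence)
blocked = [[incidence[r][c] for c in cols] for r in rows]   # block diagonal form
```

    On this toy matrix the reordering exposes two machine cells, each dedicated to one part family.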

  • SEATER: an object-oriented simulation environment using learning automata for telephone traffic routing

    Page(s): 349 - 356

    Presents SEATER, an object-oriented environment in which any general telephone traffic routing problem can be set up, tested, and simulated using a variety of routing methods. The routing methods available are fixed-rule routing, random routing, and routing utilizing a complete assortment of different learning automata. The paper first describes the general telephone traffic routing problem and explains the various existing fixed-rule routing schemes supported by the system. This is followed by a brief motivation for and description of learning automata routing techniques, and a survey of those schemes supported by the implemented prototype. The paper then highlights the considerations taken into account in the design and implementation of the object-oriented prototype, SEATER. The automata schemes supported by SEATER are compared to the existing fixed-rule algorithms in terms of minimizing the blocking probability of the network. The simulations show that the automata-based solutions are far superior to any fixed-rule solution; their advantage lies in their adaptability to changes in telephone traffic. The system is written in Smalltalk/V and runs on a Mac II.
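    A minimal member of the learning automata family this system draws on is the linear reward-inaction (L_RI) scheme sketched below; the two-route setup, success probabilities, and learning rate are all illustrative, not taken from SEATER.

```python
# Linear reward-inaction (L_RI) automaton choosing between two routes:
# on a reward (call not blocked) the chosen action's probability is reinforced;
# on a penalty the probabilities are left unchanged.
import random

def l_ri(success_prob, steps=20000, a=0.02, seed=1):
    random.seed(seed)
    p = [0.5, 0.5]                                  # action probabilities
    for _ in range(steps):
        i = 0 if random.random() < p[0] else 1      # pick a route
        if random.random() < success_prob[i]:       # reward only on success
            p[i] += a * (1 - p[i])
            p[1 - i] *= (1 - a)
        # inaction on penalty: no update
    return p

probs = l_ri([0.9, 0.2])   # route 0 blocks calls far less often
```

    Because updates happen only on success, the automaton gradually concentrates probability on the route with the lower blocking rate, which is exactly the adaptability advantage over fixed rules noted in the abstract.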

  • Efficient dynamic simulation of multiple manipulator systems with singular configurations

    Page(s): 306 - 313

    The paper presents an efficient algorithm for the simulation of a system of m manipulators, each having N degrees of freedom, grasping a common object. Algorithms for such a system have been previously developed by others. In Lilly and Orin (1989), an O(mN) algorithm is presented that does not fully consider the case when one or more of the manipulators are in singular configurations. However, it is stated in Rodriguez, Jain, and Kreutz-Delgado (1989) that the algorithm has O(mN) + O(m³) computational complexity when one or more of the chains are singular. This arises because the size of the system of equations to be solved grows linearly with the number of chains in the system. The algorithm presented in this paper significantly reduces the size of the system of equations to one that grows linearly with the number of singular chains, s, and achieves O(mN) + O(s³) complexity. In addition, efficient O(mN) algorithms are presented for the special cases where only one or two chains are in singular configurations. These are particularly useful because it is common to deal with systems consisting of only a few manipulators grasping a common object, and even with more manipulators, it is unlikely that many of them will be singular simultaneously. Finally, applying the algorithm developed for the case of two singularities to a dual-arm system yields an algorithm that requires fewer computations than existing methods and has the added benefit of being robust in the presence of singular manipulators.

  • Epistemic decision theory applied to multiple-target tracking

    Page(s): 234 - 245

    A decision philosophy that seeks the avoidance of error by trading off belief of truth and value of information is applied to the problem of recognizing tracks from multiple targets (multiple-target tracking, MTT). A successful MTT methodology should be robust, in that its performance degrades gracefully as the conditions of the collection become less favorable to optimal operation. By stressing the avoidance, rather than the explicit minimization, of error, the authors obtain a decision rule for trajectory-data association that does not require the resolution of all conflicting hypotheses when the database does not contain sufficient information to do so reliably. This rule, coupled with a set-valued Kalman filter for trajectory estimation, results in a methodology that does not attempt to extract more information from the database than it contains.

  • Assessment and management of software technical risk

    Page(s): 187 - 202

    This paper addresses the modeling and management of software technical risk from the life-cycle perspectives of software development. It provides an overview of the emergence of software as a powerful instrument, an insight into the evolution of subspecializations in engineering, and an outline of the conceptual framework for the modeling and management of software technical risk. It establishes the foundations upon which the conceptual framework is developed. Basic concepts in software risk assessment are introduced, focusing on technical risk and on its distinction from nontechnical risk. The shift of importance from hardware to software and its profound implications on software technical risk management are discussed. The quintessential consequence of this shift, in which hardware assumes the component implementation role and software assumes the systems implementation role, is its total influence on the understanding and the assessment of software technical risk. The challenges and opportunities facing the professional community in the communication of technical risk are considered. The conceptual framework for the modeling and management of technical risk is then developed. Major forces and traits are identified. A holistic framework based on hierarchical holographic modeling is adopted. The assessment and management of risk should ultimately enable any organization involved in software development to meet its product quality and performance goals while controlling costs and schedule.

  • N-learners problem: fusion of concepts

    Page(s): 319 - 327

    Given N learners, each capable of learning concepts (subsets) in the sense of Valiant (1985), we are interested in combining them using a single fuser. We consider two cases. First, in open fusion the fuser is given the sample and the hypotheses of the individual learners; we show that a fusion rule can be obtained by formulating this problem as another learning problem, and give sufficiency conditions that ensure the composite system to be better than the best of the individual learners. Second, in closed fusion the fuser has access to neither the training sample nor the hypotheses of the individual learners. By using a linear threshold fusion function (of the outputs of the individual learners), we show that the composite system can be made better than the best of the statistically independent learners.
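    The closed-fusion claim can be illustrated with the simplest linear threshold fuser, a majority vote: for statistically independent learners of equal accuracy p > 1/2, the probability that the majority is correct exceeds p. (The equal-accuracy setting and the numbers below are illustrative, not the paper's general result.)

```python
# Majority-vote accuracy of n independent learners, each correct with
# probability p: P(more than half are correct), from the binomial distribution.
from math import comb

def majority_accuracy(n, p):
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

acc = majority_accuracy(5, 0.7)   # five learners, 70% each → about 0.837
```

    Already with five 70%-accurate independent learners the fused system reaches roughly 83.7%, better than the best individual.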

  • Reinforcement learning for the adaptive control of nonlinear systems

    Page(s): 357 - 363

    The adaptive control of nonlinear systems is a nontrivial problem. Examples of this class of problems are found widely in many areas of control applications. While techniques for the adaptive control of linear systems are well established in the literature, there are few corresponding techniques for nonlinear systems. In this work a method is presented for the adaptive control of nonlinear systems based on a feedforward neural network. The proposed approach incorporates a neuro-controller used within a reinforcement learning framework, which reduces the problem to one of learning a stochastic approximation of an unknown average error surface. Notably, the neuro-controller does not need any input/output information about the controlled system. The proposed method promises to be an efficient tool for the adaptive control of both static and dynamic nonlinear systems. Several examples are included to illustrate the proposed scheme.

  • Finding knight's tours on an M×N chessboard with O(MN) hysteresis McCulloch-Pitts neurons

    Page(s): 300 - 306

    How can a knight be moved on a chessboard so that it visits each square once and only once and returns to the starting square? The earliest serious attempt to find a knight's tour on the chessboard was made by L. Euler in 1759 [1]. In this correspondence, a parallel algorithm based on hysteresis McCulloch-Pitts neurons is proposed to solve the knight's tour problem. The relation between the traveling salesman problem and the knight's tour problem is also discussed. A large number of simulation runs were performed to investigate the behavior of the hysteresis McCulloch-Pitts neural model. The purpose of this correspondence is to present a case study of how to represent combinatorial optimization problems by means of neural networks.
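    For comparison with the neural formulation above, a classical way to find an (open) knight's tour is backtracking with Warnsdorff's fewest-onward-moves ordering; this sketch is a standard alternative, not the paper's method.

```python
# Open knight's tour by backtracking, trying squares with the fewest onward
# moves first (Warnsdorff's heuristic), which makes backtracking rare.
MOVES = [(1, 2), (2, 1), (2, -1), (1, -2), (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def knights_tour(m, n, start=(0, 0)):
    visited = {start}
    tour = [start]

    def neighbors(sq):
        x, y = sq
        return [(x + dx, y + dy) for dx, dy in MOVES
                if 0 <= x + dx < m and 0 <= y + dy < n
                and (x + dx, y + dy) not in visited]

    def extend():
        if len(tour) == m * n:
            return True
        for sq in sorted(neighbors(tour[-1]), key=lambda s: len(neighbors(s))):
            visited.add(sq)
            tour.append(sq)
            if extend():
                return True
            visited.remove(sq)
            tour.pop()
        return False

    return tour if extend() else None

tour = knights_tour(8, 8)   # visits all 64 squares exactly once
```

    The neural approach in the paper instead encodes the tour constraints in the energy of a network of hysteresis McCulloch-Pitts neurons and lets parallel updates settle into a valid tour.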

  • Classification using set-valued Kalman filtering and Levi's decision theory

    Page(s): 313 - 319

    We consider the problem of using Levi's expected epistemic decision theory for classification when the hypotheses are of different informational values, conditioned on convex sets obtained from a set-valued Kalman filter. The background of epistemic utility decision theory with convex probabilities is outlined and a brief introduction to set-valued estimation is given. The decision theory is applied to a classifier in a multiple-target tracking scenario. A new probability density, appropriate for classification using the ratio of intensities, is introduced.
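    For readers unfamiliar with the estimator being generalized here, the ordinary point-valued scalar Kalman filter cycle looks as follows (a generic textbook sketch with illustrative noise values, not the set-valued filter of the paper, which propagates a convex set of such estimates).

```python
# One predict/update cycle of a scalar Kalman filter for a random-walk state:
# the variance grows by the process noise, then the measurement pulls the
# estimate toward z with gain K.
def kalman_step(x, P, z, q, r):
    P = P + q                  # predict: add process noise variance q
    K = P / (P + r)            # Kalman gain (r = measurement noise variance)
    x = x + K * (z - x)        # update with measurement z
    P = (1 - K) * P            # posterior variance shrinks
    return x, P

x, P = 0.0, 1.0                # illustrative prior
for z in [1.2, 0.9, 1.1, 1.0]:
    x, P = kalman_step(x, P, z, q=0.01, r=0.25)
```

    The set-valued filter used in this paper carries a convex set of priors through these same equations, so the classifier downstream can reason over a set of posteriors rather than a single one.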

  • Systems engineering management: a framework for the development of a multidisciplinary discipline

    Page(s): 327 - 332

    Systems engineering is a multidisciplinary function dedicated to controlling design so that all elements are integrated to provide an optimum overall system, as contrasted with the integration of optimized sub-elements. A systems engineer is a person who is capable of integrating knowledge from different disciplines and seeing problems with a “holistic view” by applying the “systems approach.” Since no complex system is created by a single person, systems engineering is strongly linked to management. The question addressed in this paper is how knowledge and skills in systems engineering management can be developed through a formal training program. We describe a multidisciplinary framework for curricula planning in systems engineering and suggest that any formal program for such training should consist of the following five chapters: (1) basic studies, (2) disciplinary studies, (3) specific systems, (4) systems engineering concepts and tools, and (5) management studies. We also advise that any multidisciplinary program of this nature should be established as a cooperative effort of an engineering school and a management school.

  • A bibliography of heuristic search research through 1992

    Page(s): 268 - 293

    Presents a categorized bibliography of heuristic search research materials gathered largely from the artificial intelligence (AI) literature. The intent is to provide an introduction to this information for researchers and practitioners in operations research.

  • Sensitivity of a Bayesian analysis to the prior distribution

    Page(s): 216 - 221

    Consider the problem of eliciting and specifying a prior probability distribution for a Bayesian analysis. There will generally be some uncertainty in the choice of prior, especially when there is little information from which to construct such a distribution, or when several priors are elicited, say, from different experts. It is of interest, then, to characterize the sensitivity of a posterior distribution (or posterior mean) to the prior. We characterize this sensitivity in terms of bounds on the difference between posterior distributions corresponding to different priors. Further, we illustrate the results on two distinct problems: a) determining least-informative (vague) priors and b) estimating statistical quantiles for a problem in analyzing projectile accuracy.
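    The sensitivity being bounded can be seen concretely in the conjugate Beta-Binomial setting (an illustrative example, not the paper's analysis): the gap between posterior means under two different priors shrinks as data accumulate.

```python
# Posterior mean of a Bernoulli success rate under a Beta(a, b) prior,
# after observing `successes` in n trials.
def posterior_mean(a, b, successes, n):
    return (a + successes) / (a + b + n)

def prior_gap(n):
    """Gap between posterior means under Beta(1,1) vs Beta(5,1) priors,
    with the data fixed at a 60% observed success rate."""
    s = int(0.6 * n)
    return abs(posterior_mean(1, 1, s, n) - posterior_mean(5, 1, s, n))

gap_small, gap_large = prior_gap(10), prior_gap(1000)   # gap shrinks with n
```

    With 10 observations the two priors still disagree by about 0.1 in the posterior mean; with 1000 observations the gap is under 0.002, illustrating the kind of prior-to-posterior sensitivity bound the paper formalizes.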

  • Interpolation, completion, and learning fuzzy rules

    Page(s): 332 - 342

    Fuzzy inference systems and neural networks both provide mathematical systems for approximating continuous real-valued functions. Historically, fuzzy rule bases have been constructed by knowledge acquisition from experts while the weights on neural nets have been learned from data. This paper examines algorithms for constructing fuzzy rules from input-output training data. The antecedents of the rules are determined by a fuzzy decomposition of the input domains. The decomposition localizes the learning process, restricting the influence of each training example to a single rule. Fuzzy learning proceeds by determining entries in a fuzzy associative memory using the degree to which the training data matches the rule antecedents. After the training set has been processed, similarity to existing rules and interpolation are used to complete the rule base. Unlike the neural network algorithms, fuzzy learning algorithms require only a single pass through the training set. This produces a computationally efficient method of learning. The effectiveness of the fuzzy learning algorithms is compared with that of a feedforward neural network trained with back-propagation.
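    A minimal sketch of the single-pass, localized learning idea described above (the triangular partitions, target function, and consequent-averaging rule here are illustrative choices, not the paper's exact algorithm; the interpolation step for completing missing rules is omitted):

```python
# Single-pass fuzzy rule learning: each training example updates only the one
# rule whose triangular antecedent it matches best; inference blends rule
# consequents by membership degree.
def tri(x, c, w):
    """Triangular membership centered at c with half-width w."""
    return max(0.0, 1.0 - abs(x - c) / w)

def learn_rules(data, centers, w):
    num = [0.0] * len(centers)
    den = [0.0] * len(centers)
    for x, y in data:                          # a single pass through the data
        i = max(range(len(centers)), key=lambda j: tri(x, centers[j], w))
        mu = tri(x, centers[i], w)
        num[i] += mu * y                       # membership-weighted consequent
        den[i] += mu
    return [n / d if d else None for n, d in zip(num, den)]

def infer(x, centers, w, rules):
    ws = [tri(x, c, w) for c in centers]
    s = sum(m * r for m, r in zip(ws, rules) if r is not None)
    t = sum(m for m, r in zip(ws, rules) if r is not None)
    return s / t

centers = [i / 4 for i in range(5)]                    # five fuzzy sets on [0, 1]
data = [(i / 20, (i / 20) ** 2) for i in range(21)]    # learn y = x^2
rules = learn_rules(data, centers, 0.25)
```

    One pass over 21 examples already yields a usable approximation of y = x², in contrast to the many epochs a back-propagation network would need.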
