
IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans

Issue 5 • Sept. 2006


  • Table of contents

    Publication Year: 2006, Page(s): c1
    PDF (40 KB)
    Freely Available from IEEE
  • IEEE Transactions on Systems, Man, and Cybernetics—Part A: Systems and Humans publication information

    Publication Year: 2006, Page(s): c2
    PDF (35 KB)
    Freely Available from IEEE
  • Optimal zoning design by genetic algorithms

    Publication Year: 2006, Page(s): 833 - 846
    Cited by: Papers (20)
    PDF (1175 KB) | HTML

    In pattern recognition, zoning is one of the most effective methods for extracting distinctive characteristics from patterns. So far, many zoning methods have been proposed, based on standard partitioning criteria of the pattern image. In this paper, a new technique is presented for zoning design. Zoning is considered as the result of an optimization problem, and a genetic algorithm is used to find the optimal zoning that minimizes the value of the cost function associated with the classification. For this purpose, a new description of zonings by Voronoi diagrams is used, which is found to be well suited to the genetic technique. The experimental tests, carried out in the field of handwritten numeral and character recognition, show that the proposed technique leads to zonings superior to those produced by traditional zoning methods.
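    As an illustration of the nearest-site zoning idea, the Python sketch below (a simplified assumption of how such a scheme could look, not the authors' implementation) assigns foreground pixels to the zone of their closest Voronoi point and scores a candidate zoning, encoded simply as a list of Voronoi points, by its leave-one-out nearest-neighbour classification error, the kind of cost a genetic algorithm could minimize:

        import numpy as np

        def zone_of(pixel, sites):
            """Index of the Voronoi site closest to a pixel (row, col)."""
            d = [(pixel[0] - r) ** 2 + (pixel[1] - c) ** 2 for r, c in sites]
            return int(np.argmin(d))

        def zoning_features(image, sites):
            """Fraction of foreground pixels falling in each Voronoi zone."""
            counts = np.zeros(len(sites))
            foreground = np.argwhere(image > 0)
            for p in foreground:
                counts[zone_of(p, sites)] += 1
            return counts / max(len(foreground), 1)

        def zoning_cost(sites, images, labels):
            """Leave-one-out 1-NN error of the zoning features (the GA's cost)."""
            feats = np.array([zoning_features(img, sites) for img in images])
            errors = 0
            for i in range(len(images)):
                d = np.linalg.norm(feats - feats[i], axis=1)
                d[i] = np.inf
                errors += labels[int(np.argmin(d))] != labels[i]
            return errors / len(images)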

  • Agent-based modeling of supply chains for distributed scheduling

    Publication Year: 2006, Page(s): 847 - 861
    Cited by: Papers (18)
    PDF (369 KB)

    This paper considers a supply chain that comprises multiple independent and autonomous enterprises (project managers) that seek and select various contractors to complete the operations of their projects. The project managers and contractors jointly determine the schedules of their operations, while no single enterprise has complete information about the others. The centralized scheduling approach can usually obtain good global performance but requires sharing nearly complete information, which is difficult or even impractical given the distributed nature of real-life supply chains. This paper proposes an agent-based supply chain model to support distributed scheduling. A modified contract-net protocol (MCNP) is proposed to enable more information sharing among the enterprises than the conventional contract-net protocol (CNP). Experimental simulation studies are conducted to compare and contrast the performances of the centralized heuristic (CTR), conventional CNP, and MCNP approaches. The results show that MCNP outperforms CNP and performs comparably with CTR in terms of total supply chain operating cost when project complexity is high. Moreover, it is found that although CTR is better than MCNP in terms of global performance, MCNP yields good schedule stability when facing unexpected disturbances.
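    For orientation, a minimal Python skeleton of one contract-net announce/bid/award round is sketched below; the message contents, the cost-based award rule, and the schedule update are illustrative assumptions, and the paper's MCNP extends the conventional protocol with additional information sharing that is not shown here:

        from dataclasses import dataclass

        @dataclass
        class Bid:
            contractor: str
            cost: float
            finish_time: float

        class Contractor:
            def __init__(self, name, rate, available_at=0.0):
                self.name, self.rate, self.available_at = name, rate, available_at

            def bid(self, duration):
                """Reply to a task announcement with a cost and an earliest finish time."""
                return Bid(self.name, self.rate * duration, self.available_at + duration)

            def award(self, duration):
                """Accept the contract and update the local schedule."""
                self.available_at += duration

        def contract_net_round(duration, contractors):
            """A project manager announces a task, collects bids, and awards to the cheapest."""
            bids = [c.bid(duration) for c in contractors]
            winner = min(bids, key=lambda b: b.cost)
            next(c for c in contractors if c.name == winner.contractor).award(duration)
            return winner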

  • A probabilistic framework for modeling and real-time monitoring human fatigue

    Publication Year: 2006, Page(s): 862 - 875
    Cited by: Papers (30)
    PDF (623 KB) | HTML

    A probabilistic framework based on Bayesian networks for modeling and inferring human fatigue in real time, by integrating information from various sensory data and relevant contextual information, is introduced. A static fatigue model that captures the static relationships between fatigue, significant factors that cause fatigue, and various sensory observations that typically result from fatigue is first presented. Such a model provides a mathematically coherent and sound basis for systematically aggregating uncertain evidence from different sources, augmented with relevant contextual information. The static model, however, fails to capture the dynamic aspect of fatigue, which is a cognitive state that develops over time. To account for this temporal aspect, the static fatigue model is extended using dynamic Bayesian networks. The dynamic fatigue model allows fatigue evidence to be integrated not only spatially but also temporally, leading to more robust and accurate fatigue modeling and inference. A real-time nonintrusive fatigue monitor was built by integrating the proposed fatigue model with a computer vision system developed for extracting various visual cues typically related to fatigue. Performance evaluation of the fatigue monitor using both synthetic and real data demonstrates the validity of the proposed fatigue model in both modeling and real-time inference of fatigue.
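    The temporal part of such a model can be pictured as a forward filtering update over the fatigue state. In the Python sketch below, the two-state model, transition matrix, and cue likelihoods are invented for illustration and are not the paper's parameters:

        import numpy as np

        # States: 0 = alert, 1 = fatigued (illustrative two-state model)
        transition = np.array([[0.95, 0.05],   # P(F_t | F_{t-1} = alert)
                               [0.10, 0.90]])  # P(F_t | F_{t-1} = fatigued)

        def likelihood(perclos, yawning):
            """P(visual cues | state) for the two states (assumed values)."""
            p_alert = (0.2 if perclos > 0.3 else 0.8) * (0.1 if yawning else 0.9)
            p_fatig = (0.7 if perclos > 0.3 else 0.3) * (0.5 if yawning else 0.5)
            return np.array([p_alert, p_fatig])

        def filter_step(belief, perclos, yawning):
            """One step: predict with the transition model, correct with the evidence."""
            predicted = transition.T @ belief
            posterior = likelihood(perclos, yawning) * predicted
            return posterior / posterior.sum()

        belief = np.array([0.9, 0.1])  # prior: probably alert
        for observation in [(0.10, False), (0.35, False), (0.40, True)]:
            belief = filter_step(belief, *observation)
            print("P(fatigued) =", round(float(belief[1]), 3))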

  • An optimization approach to multiperson decision making based on different formats of preference information

    Publication Year: 2006, Page(s): 876 - 889
    Cited by: Papers (22)
    PDF (239 KB)

    Multiperson decision making (MPDM) problems with different formats of preference information constitute one of the emerging research areas in decision analysis. Existing approaches for dealing with different preference formats tend to be unwieldy. This paper proposes a new method to solve the problem, in which the preference information on alternatives provided by experts can be represented in four different formats, namely: 1) utility values; 2) preference orderings; 3) multiplicative preference relations; and 4) fuzzy preference relations. An optimization model is constructed to integrate the four formats of preference and to assess the ranking values of the alternatives. The model is shown to be theoretically sound and complete via a series of theorems, and a corresponding algorithm is then developed. A numerical example is given to illustrate the procedure. The proposed approach is more efficient and simpler than existing approaches because it does not need to unify the different formats of preferences or to aggregate individual preferences into a collective one. It therefore overcomes a major shortcoming of existing approaches, which lose or distort the original preference information in the process of unifying the formats.

  • Measuring ambiguity in the evidence theory

    Publication Year: 2006, Page(s): 890 - 903
    Cited by: Papers (22)
    PDF (272 KB)

    In the framework of evidence theory, ambiguity is a general term proposed by Klir and Yuan in 1995 to gather the two types of uncertainty coexisting in this theory: discord and nonspecificity. Respecting the five requirements for total measures of uncertainty in evidence theory, different ways have been proposed to quantify the total uncertainty, i.e., the ambiguity, of a belief function. Among them is a measure of aggregate uncertainty, called AU, that captures both types of uncertainty in an aggregate fashion. However, some shortcomings of AU have been identified: 1) it is complicated to compute; 2) it is highly insensitive to changes in evidence; and 3) it hides the distinction between the two types of uncertainty that coexist in every theory of imprecise probabilities. To overcome these shortcomings, Klir and Smith defined the TU1 measure, a linear combination of the AU measure and the nonspecificity measure N. But the TU1 measure does not solve the problem of computational complexity, and it introduces a new problem with the choice of the linear parameter delta. In this paper, an alternative to AU for quantifying the ambiguity of belief functions is proposed. This measure, called the Ambiguity Measure (AM), satisfies all the requirements for general measures and also overcomes some of the shortcomings of the AU measure. Indeed, AM overcomes the limitations of AU by: 1) minimizing complexity for a minimum number of focal points; 2) allowing for sensitivity to changes in evidence; and 3) better distinguishing discord and nonspecificity. Moreover, AM is a special case of TU1 that does not need the parameter delta.
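    The abstract does not reproduce the definition of AM. Assuming the form usual in the evidence-theory literature, namely the Shannon entropy of the pignistic transform of the mass function, a short Python sketch is:

        from math import log2

        def pignistic(masses):
            """BetP(theta) = sum over focal sets A containing theta of m(A) / |A|."""
            frame = set().union(*masses)
            betp = {theta: 0.0 for theta in frame}
            for focal, m in masses.items():
                for theta in focal:
                    betp[theta] += m / len(focal)
            return betp

        def ambiguity(masses):
            """AM(m): entropy of the pignistic probability distribution."""
            return -sum(p * log2(p) for p in pignistic(masses).values() if p > 0)

        # Example: a belief function on {a, b, c} with two focal elements.
        m = {frozenset({"a"}): 0.6, frozenset({"a", "b", "c"}): 0.4}
        print(round(ambiguity(m), 3))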

  • A predictive control strategy for nonlinear NOx decomposition process in thermal power plants

    Publication Year: 2006, Page(s): 904 - 921
    Cited by: Papers (1)
    PDF (658 KB) | HTML

    To handle the load-dependent nonlinear properties of the nitrogen oxide (NOx) decomposition process in thermal power plants, a local-linearization modeling approach based on a kind of global Nonlinear AutoRegressive Moving Average with eXogenous input (NARMAX) model, named the exponential ARMAX (ExpARMAX) model, is presented. The ExpARMAX model has exponential-type coefficients that depend on the load of the power plant and are estimated offline. In order to take advantage of existing conventional controllers and to reduce the cost of industrial identification experiments, we propose a model structure that makes it possible to identify the ExpARMAX model from commercial operation data. On the basis of the model, a long-range predictive control strategy that does not require online parameter estimation is investigated. The influence of some intermediate variables treated as process disturbances is studied, and a scheme using a set of multi-step-ahead predictors of the intermediate variables to improve control performance is also presented. A simulation study shows that the ExpARMAX model gives satisfactory modeling accuracy for the NOx decomposition (de-NOx) process over a large operating range, and that the proposed control algorithm significantly improves control performance.
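    The abstract does not state the model equations. As a purely illustrative form (an assumption, not the paper's definition) of an ARMAX model whose coefficients vary exponentially with the plant load L_t, one may write

        y_t = \sum_{i=1}^{n_a} a_i(L_t)\, y_{t-i} + \sum_{j=1}^{n_b} b_j(L_t)\, u_{t-j} + \sum_{k=1}^{n_c} c_k e_{t-k} + e_t,
        \qquad a_i(L_t) = \alpha_i + \beta_i e^{-\gamma L_t},

    where u_t is the control input and e_t the noise; the load-dependent coefficients are estimated offline, and local linearization fixes them at the current operating point for predictive control.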

  • SwinDeW-a p2p-based decentralized workflow management system

    Publication Year: 2006, Page(s): 922 - 935
    Cited by: Papers (27)
    PDF (271 KB)

    Workflow technology has undoubtedly been one of the most important domains of interest over the past decades, from both research and practice perspectives. However, problems such as potentially poor performance, lack of reliability, limited scalability, insufficient user support, and unsatisfactory system openness have been largely ignored. This research reveals that these problems are mainly caused by the mismatch between the distributed nature of the applications and the centralized design of the management systems. Conventional approaches based on the client-server architecture have therefore not addressed them properly so far. The authors abandon the dominant client-server architecture for workflow support because of its inherent limitations. Instead, a peer-to-peer infrastructure is used to provide genuinely decentralized workflow support, which removes the centralized data repository and control engine from the system. Consequently, both data and control are distributed so that workflow functions are fulfilled through direct communication and coordination among the relevant peers. With this approach, performance bottlenecks are likely to be eliminated, while increased resilience to failure, enhanced scalability, and better user support are likely to be achieved. Moreover, the approach also provides a more open framework for service-oriented workflow over the Internet. This paper presents the authors' decentralized workflow system design. The paper also covers the corresponding mechanisms for system functions and the Swinburne Decentralized Workflow (SwinDeW) prototype, which implements and demonstrates this design and these functions.

  • A linear matrix inequality approach for guaranteed cost control of systems with state and input delays

    Publication Year: 2006, Page(s): 936 - 942
    Cited by: Papers (13)
    PDF (135 KB) | HTML

    The robust control problem for linear systems with parameter uncertainties and time-varying delays is examined. By using an appropriate uncertainty description, a linear state-feedback control law is found that ensures the closed-loop system's stability and a performance measure in terms of a guaranteed cost. A linear matrix inequality objective minimization approach is used to determine the "optimal" choice of the free parameters in the uncertainty description, leading to the minimal guaranteed cost.
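    For orientation, a generic statement of the guaranteed cost problem for a system with state and input delays (an assumed textbook form; the paper's specific uncertainty description and LMIs are not given in the abstract) is: for

        \dot{x}(t) = (A + \Delta A)\, x(t) + (A_d + \Delta A_d)\, x(t - d(t)) + (B + \Delta B)\, u(t - h(t)),
        \qquad J = \int_0^{\infty} \big( x^{T} Q x + u^{T} R u \big)\, dt,

    find a state feedback u(t) = K x(t) such that the closed loop is asymptotically stable and J <= J* for all admissible uncertainties and delays; the gain K and the bound J* are then obtained by minimizing J* subject to linear matrix inequalities in the Lyapunov-Krasovskii functional variables.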

  • Extending the decision field theory to model operators' reliance on automation in supervisory control situations

    Publication Year: 2006, Page(s): 943 - 959
    Cited by: Papers (6)
    PDF (1229 KB) | HTML

    Appropriate trust in and reliance on automation are critical for safe and efficient system operation. This paper fills an important research gap by describing a quantitative model of trust in automation. We extend decision field theory (DFT) to describe the multiple sequential decisions that characterize reliance on automation in supervisory control situations. Extended DFT (EDFT) represents an iterated decision process and the evolution of operator preference for automatic and manual control. The EDFT model predicts trust and reliance, and describes the dynamic interaction between operator and automation in a closed-loop fashion: the products of earlier decisions can transform the nature of later events and decisions. The simulation results show that the EDFT model captures several consistent empirical findings, such as the inertia of trust and the nonlinear characteristics of trust and reliance. The model also demonstrates the effects of different types of automation on trust and reliance. It is possible to expand the EDFT model for multioperator multiautomation situations.
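    The abstract does not reproduce the model equations. In standard decision field theory, on which EDFT builds, the preference state P over the options (here, automatic versus manual control) evolves by the stochastic linear update

        P(t + h) = S\, P(t) + C\, M\, W(t + h),

    where S carries memory and lateral inhibition, C is a contrast matrix, M holds the momentary evaluations of the options on each attribute, and W(t+h) is a stochastic attention-weight vector; reliance corresponds to the option whose preference first reaches a threshold. This is the textbook DFT form, given for orientation, not necessarily the exact EDFT formulation.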

  • New tools for decision analysts

    Publication Year: 2006, Page(s): 960 - 967
    Cited by: Papers (16)
    PDF (141 KB) | HTML

    This paper presents psychological research that can help people make better decisions. Decision analysts typically: 1) elicit outcome probabilities; 2) assess attribute weights; and 3) suggest the option with the highest overall value. Decision analysis can be challenging because of environmental and psychological issues. Fast and frugal methods such as natural frequency formats, frugal multiattribute models, and fast and frugal decision trees can address these issues. Not only are the methods fast and frugal, but they can also produce results that are surprisingly close to or even better than those obtained by more extensive analysis. Apart from raising awareness of these findings among engineers, the authors also call for further research on the application of fast and frugal methods to decision analysis.
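    As an illustration of the natural frequency format (the numbers are invented for this example, not taken from the paper): instead of stating a 1% base rate, 80% sensitivity, and a 9.6% false-positive rate and asking for an application of Bayes' rule, one says that of 1000 people, 10 have the condition, 8 of those 10 test positive, and about 95 of the 990 without it also test positive, so that

        P(\text{condition} \mid \text{positive}) = \frac{8}{8 + 95} \approx 0.08,

    a value that can be read directly from the frequencies rather than computed from conditional probabilities.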

  • Simulation-based evaluation of defuzzification-based approaches to fuzzy multiattribute decision making

    Publication Year: 2006, Page(s): 968 - 977
    Cited by: Papers (19)
    PDF (284 KB) | HTML

    This paper presents a simulation-based study that evaluates the performance of 12 defuzzification-based approaches for solving the general fuzzy multiattribute decision-making (MADM) problem requiring a cardinal ranking of decision alternatives. These approaches are generated from six defuzzification methods in conjunction with the simple additive weighting (SAW) method and the technique for order preference by similarity to ideal solution (TOPSIS) method. The consistency and effectiveness of these approaches are examined in terms of four new objective performance measures, which are based on five evaluation indexes. The simulation results show that the approaches capable of using all the available information on the fuzzy numbers effectively in the defuzzification process produce more consistent ranking outcomes. In particular, the SAW method with degree-of-dominance defuzzification proves to be the best-performing approach overall, followed by the SAW method with area-center defuzzification. These findings are of practical significance in real-world settings where defuzzification-based approaches must be selected for solving general fuzzy MADM problems under specific decision contexts.
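    The Python sketch below illustrates one of the twelve combinations, SAW with area-center (centroid) defuzzification of triangular fuzzy ratings; the ratings, weights, and alternatives are made-up inputs, not the paper's simulation design:

        def centroid(tri):
            """Area-center defuzzification of a triangular fuzzy number (a, b, c)."""
            a, b, c = tri
            return (a + b + c) / 3.0

        def saw_ranking(ratings, weights):
            """ratings[alt] is a list of triangular fuzzy ratings, one per criterion."""
            scores = {
                alt: sum(w * centroid(r) for w, r in zip(weights, crits))
                for alt, crits in ratings.items()
            }
            return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

        ratings = {
            "A1": [(5, 7, 9), (3, 5, 7)],
            "A2": [(7, 9, 10), (1, 3, 5)],
        }
        weights = [0.6, 0.4]
        print(saw_ranking(ratings, weights))  # cardinal ranking of the alternatives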

  • Palm line extraction and matching for personal authentication

    Publication Year: 2006, Page(s): 978 - 987
    Cited by: Papers (26)
    PDF (613 KB) | HTML

    The palm print is a new and emerging biometric feature for personal recognition. The stable line features, or "palm lines", which comprise the principal lines and wrinkles, can clearly describe a palm print and can be extracted even in low-resolution images. This paper presents a novel approach to palm line extraction and matching for use in personal authentication. To extract palm lines, a set of directional line detectors is devised, and these detectors are then used to extract lines in different directions. To avoid losing the details of the palm line structure, these irregular lines are represented by their chain codes. To match palm lines, a matching score between two palm prints is defined according to the points of their palm lines. The experimental results show that the proposed approach can effectively discriminate between palm prints even when the palm prints are dirty. The storage and speed of the proposed approach satisfy the requirements of a real-time biometric system.
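    A point-based matching score of the kind described can be sketched as follows in Python; the pixel tolerance and the symmetric min rule are assumptions for illustration, not the paper's exact score:

        def match_score(points_a, points_b, tol=2):
            """Fraction of points in A with a point of B within `tol` pixels (Chebyshev)."""
            if not points_a or not points_b:
                return 0.0
            hits = sum(
                any(max(abs(x - u), abs(y - v)) <= tol for (u, v) in points_b)
                for (x, y) in points_a
            )
            return hits / len(points_a)

        def palm_similarity(lines_a, lines_b, tol=2):
            """Symmetric score used to decide whether two palm prints match."""
            return min(match_score(lines_a, lines_b, tol), match_score(lines_b, lines_a, tol))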

  • Automatically detecting criminal identity deception: an adaptive detection algorithm

    Publication Year: 2006, Page(s): 988 - 999
    Cited by: Papers (10)
    PDF (459 KB) | HTML

    Identity deception, and specifically identity concealment, is a serious problem encountered in the law enforcement and intelligence communities. In this paper, the authors discuss techniques that can automatically detect identity deception. Most existing techniques are experimental and cannot easily be applied to real applications because of problems such as missing values and large data sizes. The authors propose an adaptive detection algorithm that adapts well to incomplete identities with missing values and to large datasets containing millions of records. The authors describe three experiments showing that the algorithm is significantly more efficient than the existing record comparison algorithm, with little loss in accuracy. It can identify deception in incomplete identities with high precision. In addition, it demonstrates excellent efficiency and scalability for large databases. A case study conducted in another law enforcement agency shows that the authors' algorithm is useful in detecting both intentional deception and unintentional data errors.
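    The core record-comparison step can be pictured as a weighted field-wise string similarity that renormalizes its weights over whichever identity attributes both records actually contain; the fields, weights, and threshold in this Python sketch are illustrative assumptions, not the authors' algorithm:

        from difflib import SequenceMatcher

        FIELDS = {"name": 0.4, "dob": 0.3, "ssn": 0.3}  # assumed attributes and weights

        def field_sim(a, b):
            """Normalized string similarity in [0, 1]."""
            return SequenceMatcher(None, a, b).ratio()

        def identity_similarity(rec1, rec2):
            """Compare two identity records, skipping fields missing from either one."""
            usable = [f for f in FIELDS if rec1.get(f) and rec2.get(f)]
            if not usable:
                return 0.0
            total_w = sum(FIELDS[f] for f in usable)
            return sum(FIELDS[f] * field_sim(rec1[f], rec2[f]) for f in usable) / total_w

        r1 = {"name": "John W. Smith", "dob": "1970-02-01", "ssn": None}
        r2 = {"name": "Jon Smith", "dob": "1970-02-01", "ssn": "123-45-6789"}
        print(identity_similarity(r1, r2) > 0.8)  # flag as a possible matching identity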

  • An efficient algorithm for optimal linear estimation fusion in distributed multisensor systems

    Publication Year: 2006, Page(s): 1000 - 1009
    Cited by: Papers (23)
    PDF (190 KB) | HTML

    Under the assumption of independent observation noises across sensors, Bar-Shalom and Campo proposed a distributed fusion formula for two-sensor systems whose main calculation is the inversion of submatrices of the error covariance of the two local estimates, rather than the inversion of the error covariance itself. However, a correspondingly simple estimation fusion formula has been absent for general distributed multisensor systems. In this paper, an efficient iterative algorithm for distributed multisensor estimation fusion is presented that requires no restrictive assumption on the noise covariance: neither independent observation noises across sensors, nor a restriction to two sensors, nor direct computation of the Moore-Penrose generalized inverse of the joint error covariance of the local estimates is needed. At each iteration, only the inverse or generalized inverse of a matrix with the same dimension as the error covariance of a single-sensor estimate is required. The proposed algorithm is, in fact, a generalization of Bar-Shalom and Campo's fusion formula, and it reduces the computational complexity significantly since the number of iterative steps is less than the number of sensors. An example of a three-sensor system shows how to implement the specific iterative steps and reduce the computational complexity.
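    For reference, the two-sensor Bar-Shalom-Campo rule that the algorithm generalizes fuses the local estimates \hat{x}_1 and \hat{x}_2, with error covariances P_1, P_2 and cross-covariance P_{12} = P_{21}^T, as

        \hat{x} = \hat{x}_1 + (P_1 - P_{12})\,(P_1 + P_2 - P_{12} - P_{21})^{-1}\,(\hat{x}_2 - \hat{x}_1),

    so that only a matrix of the size of a single local error covariance has to be inverted rather than the full joint covariance.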

  • Learning rule representations from data

    Publication Year: 2006, Page(s): 1010 - 1028
    Cited by: Papers (7)
    PDF (609 KB)

    We discuss a procedure that extracts statistical and entropic information from data in order to discover the Boolean rules underlying them. We work within a granular computing framework in which logical implications between statistics on the observed sample and properties of the whole data population are stressed in terms of both probabilistic and possibilistic measures of the inferred rules. Under the main constraint that the class of rules is not known in advance, we split the building of hypotheses about them into various levels of increasing description complexity, balancing the feasibility of the learning procedure against the understandability and reliability of the formulas that are discovered. We assess the entire learning system in terms of truth tables, formula lengths, and computational resources through a set of case studies.

  • Approximation capabilities of hierarchical hybrid systems

    Publication Year: 2006, Page(s): 1029 - 1039
    Cited by: Papers (5)
    PDF (186 KB)

    This paper investigates the approximation capabilities of hierarchical hybrid systems, which are motivated by research in hierarchical fuzzy systems, hybrid intelligent systems, and the modeling of partly known systems. For a function (system) with a known hierarchical structure (i.e., one that can be represented as a composition of simpler, lower dimensional subsystems), it is shown that hierarchical hybrid systems have the structure approximation capability, in the sense that such a hybrid approximation scheme can approximate both the overall system and all of the subsystems to any desired degree of accuracy. For a function (system) with an unknown hierarchical structure, Kolmogorov's theorem is used to construct a hierarchical structure for the given function (system). It is then shown that hierarchical hybrid systems are universal approximators.
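    Kolmogorov's theorem, the tool invoked for the unknown-structure case, states that every continuous function f on [0,1]^n can be written as a two-level composition of continuous one-dimensional functions,

        f(x_1, \dots, x_n) = \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right),

    which supplies a generic hierarchical decomposition that the hybrid subsystems can then approximate level by level.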

  • Special issue on enterprise services computing and industrial applications

    Publication Year: 2006, Page(s): 1040
    PDF (113 KB)
    Freely Available from IEEE
  • IEEE Systems, Man, and Cybernetics Society Information

    Publication Year: 2006, Page(s): c3
    PDF (26 KB)
    Freely Available from IEEE
  • IEEE Transactions on Systems, Man, and Cybernetics—Part A: Systems and Humans Information for authors

    Publication Year: 2006, Page(s): c4
    PDF (35 KB)
    Freely Available from IEEE

Aims & Scope

This Transactions covers the fields of systems engineering and human-machine systems. Systems engineering includes efforts that involve issue formulation, issue analysis and modeling, and decision making and issue interpretation at any of the life-cycle phases associated with the definition, development, and implementation of large systems.

 

This Transactions ceased production in 2012. The current retitled publication is IEEE Transactions on Systems, Man, and Cybernetics: Systems.


Meet Our Editors

Editor-in-Chief
Dr. Witold Pedrycz
University of Alberta