
IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans

Issue 5 • September 2010


Displaying Results 1 - 24 of 24
  • Table of contents

    Publication Year: 2010 , Page(s): C1 - 869
    PDF (50 KB)
    Freely Available from IEEE
  • IEEE Transactions on Systems, Man, and Cybernetics—Part A: Systems and Humans publication information

    Publication Year: 2010 , Page(s): C2
    PDF (39 KB)
    Freely Available from IEEE
  • Special Issue on Model-Based Diagnostics

    Publication Year: 2010 , Page(s): 870 - 873
    Cited by:  Papers (1)  |  Patents (1)
    PDF (153 KB)
    Freely Available from IEEE
  • Probabilistic Model-Based Diagnosis: An Electrical Power System Case Study

    Publication Year: 2010 , Page(s): 874 - 885
    Cited by:  Papers (11)
    PDF (782 KB) | HTML

    We present in this paper a case study of the probabilistic approach to model-based diagnosis. Here, the diagnosed system is a real-world electrical power system (EPS), i.e., the Advanced Diagnostic and Prognostic Testbed (ADAPT) located at the NASA Ames Research Center. Our probabilistic approach is formally well founded and based on Bayesian networks (BNs) and arithmetic circuits (ACs). We pay special attention to meeting two of the main challenges often associated with real-world application of model-based diagnosis technologies: model development and real-time reasoning. To address the challenge of model development, we develop a systematic approach to representing EPSs as BNs, supported by an easy-to-use specification language. To address the real-time reasoning challenge, we compile BNs into ACs. AC evaluation (ACE) supports real-time diagnosis by being predictable, fast, and exact. In experiments with the ADAPT BN, which contains 503 discrete nodes and 579 edges, our approach produces accurate results, and the time taken to compute the most probable explanation using ACs has a mean of 0.2625 ms and a standard deviation of 0.2028 ms. In comparative experiments, we found that, while the variable elimination and join tree propagation algorithms also perform very well in the ADAPT setting, ACE was an order of magnitude or more faster.

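The abstract above poses diagnosis as a most-probable-explanation (MPE) query over a Bayesian network. Below is a minimal sketch of that query on an invented one-relay system, using variable elimination from the pgmpy library (one of the comparison algorithms mentioned) rather than the authors' arithmetic-circuit compiler; the structure and numbers are made up.

```python
# Toy diagnosis BN: a relay's health influences a voltage reading; the MPE
# query recovers the most probable health state given the observation.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("RelayHealth", "VoltageSensor")])
model.add_cpds(
    # State 0 = healthy, 1 = stuck-open; invented prior fault rate of 10%.
    TabularCPD("RelayHealth", 2, [[0.9], [0.1]]),
    # Rows: sensor reads nominal (0) / zero volts (1); columns: relay state.
    TabularCPD("VoltageSensor", 2,
               [[0.95, 0.05],
                [0.05, 0.95]],
               evidence=["RelayHealth"], evidence_card=[2]),
)

# A zero-volt reading makes the fault the most probable explanation.
mpe = VariableElimination(model).map_query(
    variables=["RelayHealth"], evidence={"VoltageSensor": 1})
print(mpe)  # {'RelayHealth': 1}
```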
  • Robust Fault Diagnosis for Atmospheric Reentry Vehicles: A Case Study

    Publication Year: 2010 , Page(s): 886 - 899
    Cited by:  Papers (6)
    PDF (1809 KB) | HTML

    This paper deals with the design of robust model-based fault detection and isolation (FDI) systems for atmospheric reentry vehicles. This work draws on a European-level project that is a collaborative effort between the University of Bordeaux, the European Space Agency, and European Aeronautic Defence and Space Company Astrium on innovative and robust strategies for reusable launch vehicle (RLV) autonomy. Using an H∞/H− setting, a robust residual-based scheme is developed to diagnose faults on the vehicle wing-flap actuators. This design stage is followed by an original and specific diagnosis-oriented analysis phase based on the calculation of the generalized structured singular value. The latter provides a necessary and sufficient condition for robustness and FDI fault sensitivity over the whole vehicle flight trajectory. A key feature of the proposed approach is that the coupling between the in-plane and out-of-plane vehicle motions, as well as the effects that faults could have on the guidance, navigation, and control performances, is explicitly taken into account within the design procedure. The faulty situations are selected by a prior trimmability analysis to determine those for which the remaining healthy control effectors are able to maintain the vehicle around its center of gravity. Finally, some performance indicators, including detection time, required onboard computational effort, and CPU time consumption, are assessed and discussed. Simulation results are based on a nonlinear benchmark of the HL-20 vehicle under realistic operational conditions during the autolanding phase. The Monte Carlo results are quite encouraging, illustrating clearly the effectiveness of the proposed technique and suggesting that this solution could be considered a viable candidate for future RLV programs.

  • Fault Diagnosis Using a Timed Discrete-Event Approach Based on Interval Observers: Application to Sewer Networks

    Publication Year: 2010 , Page(s): 900 - 916
    Cited by:  Papers (9)
    PDF (1683 KB) | HTML

    This paper proposes a fault diagnosis method using a timed discrete-event approach based on interval observers that improves the integration of fault detection and isolation tasks. The interface between fault detection and fault isolation considers the activation degree and the occurrence time instant of the diagnostic signals, using a combination of several theoretical fault signature matrices that store the knowledge of the relationship between diagnostic signals and faults. The fault isolation module is implemented using a timed discrete-event approach that recognizes the occurrence of a fault by identifying a unique sequence of observable events (fault signals). The states and transitions that characterize such a system can be inferred directly from the relation between fault signals and faults. The proposed fault diagnosis approach is motivated by the problem of detecting and isolating faults in the limnimeters (level meter sensors) of Barcelona's urban sewer system. The results obtained in this case study illustrate the benefits of using the proposed approach in comparison with the standard fault detection and isolation approach.

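The isolation interface described above matches the order in which diagnostic signals fire against theoretical fault signatures. A toy sketch of that matching, under the simplifying assumption that a signature is just an ordered tuple of signal names; the faults and signatures are invented.

```python
# Each fault's theoretical signature: the diagnostic signals it should
# activate, in order of occurrence.
signatures = {
    "sensor_bias":   ("r1", "r3"),
    "sensor_drift":  ("r1", "r2"),
    "pipe_blockage": ("r2", "r3"),
}

def isolate(observed):
    """Return the fault(s) whose signature is a prefix of the observed
    sequence of fault signals."""
    candidates = [fault for fault, sig in signatures.items()
                  if tuple(observed[:len(sig)]) == sig]
    return candidates[0] if len(candidates) == 1 else candidates

print(isolate(["r1", "r2"]))  # -> 'sensor_drift'
```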
  • A Comprehensive Diagnosis Methodology for Complex Hybrid Systems: A Case Study on Spacecraft Power Distribution Systems

    Publication Year: 2010 , Page(s): 917 - 931
    Cited by:  Papers (23)
    PDF (767 KB) | HTML

    The application of model-based diagnosis schemes to real systems introduces many significant challenges, such as building accurate system models for heterogeneous systems with complex behaviors, dealing with noisy measurements and disturbances, and producing valuable results in a timely manner with limited information and computational resources. The Advanced Diagnostics and Prognostics Testbed (ADAPT), which was deployed at the NASA Ames Research Center, is a representative spacecraft electrical power distribution system that embodies a number of these challenges. ADAPT contains a large number of interconnected components, and a set of circuit breakers and relays that enable a number of distinct power distribution configurations. The system includes electrical dc and ac loads, mechanical subsystems (such as motors), and fluid systems (such as pumps). The system components are susceptible to different types of faults, i.e., unexpected changes in parameter values, discrete faults in switching elements, and sensor faults. This paper presents Hybrid Transcend, which is a comprehensive model-based diagnosis scheme to address these challenges. The scheme uses the hybrid bond graph modeling language to systematically develop computational models and algorithms for hybrid state estimation, robust fault detection, and efficient fault isolation. The computational methods are implemented as a suite of software tools that enable diagnostic analysis and testing through simulation, diagnosability studies, and deployment on the experimental testbed. Simulation and experimental results demonstrate the effectiveness of the methodology.

  • Pervasive Diagnosis

    Publication Year: 2010 , Page(s): 932 - 944
    Cited by:  Papers (4)
    PDF (843 KB) | HTML

    In model-based production, a planner uses a system description to create plans that achieve production goals. The same description can be used by model-based diagnosis to infer the condition of components from sensor data. When production is realized by a sequence of plans, prior work has demonstrated that diagnosis can be used to adapt the plans to compensate for component degradation. However, the sources of diagnostic information are severely limited. Diagnosis must either make inferences from observations during production over which it has no control (passive diagnosis), or production must be halted to introduce diagnostic-specific plans (explicit diagnosis). We observe that the declarative nature of the model-based approach allows the planner to achieve production goals in multiple ways. This flexibility is exploited by a novel paradigm, i.e., pervasive (active) diagnosis, which constructs informative production plans that simultaneously achieve production goals while uncovering additional diagnostic information about the condition of components. We present an efficient heuristic search for these informative production plans and show through experiments on a model of an industrial digital printing press that the theoretical increase in long-run productivity can be realized on practical real-time systems. We obtain higher long-run productivity than a decoupled combination of planning and diagnosis.

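The paradigm above prefers, among plans that all achieve the production goal, the one whose outcome says the most about component health. The sketch below scores invented plans by expected information gain, under the made-up plant model that a plan succeeds only if every component it uses is healthy; the paper's actual heuristic search is more sophisticated.

```python
import itertools
import math

p_fault = {"A": 0.30, "B": 0.05, "C": 0.30}   # invented prior fault rates
plans = [("A", "B"), ("B", "C"), ("A", "C")]  # components each plan exercises

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def expected_info_gain(plan):
    # Joint health states of the plan's components (1 = faulty), with
    # independent priors; the plan succeeds iff all components are healthy.
    states = list(itertools.product([0, 1], repeat=len(plan)))
    prior = {s: math.prod(p_fault[c] if f else 1 - p_fault[c]
                          for c, f in zip(plan, s)) for s in states}
    p_success = prior[(0,) * len(plan)]
    # Success pins every used component to healthy (posterior entropy 0);
    # failure leaves a renormalized distribution over the faulty states.
    fail = [p / (1 - p_success) for s, p in prior.items() if any(s)]
    return entropy(prior.values()) - (1 - p_success) * entropy(fail)

best = max(plans, key=expected_info_gain)
print(best, round(expected_info_gain(best), 3))
```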
  • Intrinsic Hurdles in Applying Automated Diagnosis and Recovery to Spacecraft

    Publication Year: 2010 , Page(s): 945 - 958
    Cited by:  Papers (1)
    PDF (617 KB) | HTML

    Experience developing and deploying model-based diagnosis (MBD) and recovery and other model-based technologies on a variety of testbeds and flight experiments led us to explore why our expectations about the impact of MBD on spacecraft operations have not been matched by effective benefits in the field. By MBD, we mean the problem of observing a mechanical, software, or other system and determining what failures its internal components have suffered, using a generic inference algorithm and a model of the system's components and interconnections. These techniques are very attractive, suggesting a vision of machines that repair themselves, reduced costs for all kinds of endeavors, spacecraft that continue their missions even when failing, and so on. This promise inspired a broad range of activities, including our involvement over several years in flying the Livingstone and L2 onboard MBD and recovery systems as experiments on the Deep Space 1 and Earth Observer 1 spacecraft. Yet, in the end, no spacecraft project adopted the technology in operations or flew additional flight experiments. To our knowledge, no spacecraft project has adopted any other MBD technology in operations. In this paper, we present a cost/benefit analysis for MBD using expectations and experiences with Livingstone as an example. We provide an overview of common techniques for making spacecraft robust, citing fault protection schemes from recent missions. We lay out the cost, benefit, and risk advantages associated with onboard MBD and use the examples to probe each expected advantage in turn. We suggest a method for evaluating a mission that has already been flown and providing a rough estimate of the maximum value that a perfect onboard diagnosis and recovery system would have provided. By unpacking the events that must occur in order to provide value, we also identify the factors needed to compute the expected value that would be provided by a real diagnosis and recovery system. We then discuss the expected value we estimate such a system would have had for the Mars Exploration Rover mission. This has allowed us to identify the specific assumptions that made our expectations for MBD in this domain incorrect.

  • A Benchmark Diagnostic Model Generation System

    Publication Year: 2010 , Page(s): 959 - 981
    Cited by:  Papers (1)  |  Patents (1)
    PDF (2082 KB) | HTML

    It is critical to use automated generators for synthetic models and data, given the sparsity of benchmark models for empirical analysis and the cost of generating models by hand. We describe an automated generator for benchmark models that is based on a compositional modeling framework and employs graphical models for the system topology. We propose a three-step process for synthetic model generation: 1) domain analysis; 2) topology generation; and 3) system-level behavioral model generation. To demonstrate our approach on two highly different domains, we generate models using this process for circuits drawn from the International Symposium on Circuits and Systems benchmark suite and for a process-control system. We then analyze the synthetic models according to two criteria: 1) topological fidelity and 2) diagnostic efficiency. Based on this comparison, we identify the parameters necessary for the generator to produce benchmark diagnosis circuit and process-control models with realistic properties.

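Step 2 of the process above, topology generation, can be as simple as drawing a random DAG over component nodes for later steps to annotate with behaviors. A sketch with an illustrative edge probability, not one of the paper's calibrated parameters:

```python
import random

def random_topology(n_components, p_edge=0.15, seed=0):
    """Random DAG over component indices; edges run from lower- to
    higher-indexed nodes, which guarantees acyclicity."""
    rng = random.Random(seed)
    return [(i, j)
            for i in range(n_components)
            for j in range(i + 1, n_components)
            if rng.random() < p_edge]

edges = random_topology(10)
print(len(edges), edges[:5])
```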
  • An Emotional Model for a Guide Robot

    Publication Year: 2010 , Page(s): 982 - 992
    Cited by:  Papers (5)
    PDF (567 KB) | HTML

    This paper presents the structure and operation principles of an emotional model. It is a dynamic state-space model whose state variables represent the emotional state. Model matrices are time variant. A method based on fuzzy inference systems is developed to calculate the matrix coefficients at each time step. Simulation results for the guide robot case study are presented.

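A minimal sketch of the structure this abstract describes: a discrete-time state-space model whose matrices are recomputed at every step. A trivial stand-in replaces the paper's fuzzy inference system, and the state variables and coefficients are invented.

```python
import numpy as np

def fuzzy_matrices(stimulus):
    # Stand-in for the fuzzy inference system: here, emotional decay slows
    # with stimulus intensity; a real FIS would set every coefficient.
    decay = 0.9 - 0.2 * min(abs(stimulus), 1.0)
    A = np.diag([decay, decay])        # state: [valence, arousal]
    B = np.array([[0.3], [0.6]])
    return A, B

x = np.zeros((2, 1))                   # neutral emotional state
for u in [0.0, 1.0, 1.0, 0.0, -0.5]:   # stimulus sequence over time
    A, B = fuzzy_matrices(u)
    x = A @ x + B * u                  # time-variant state update
    print(x.ravel().round(3))
```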
  • Attentional-Resource Effectiveness Measures in Monitoring and Detection Tasks in Nuclear Power Plants

    Publication Year: 2010 , Page(s): 993 - 1008
    Cited by:  Papers (2)
    PDF (770 KB) | HTML

    Two measures of attentional-resource effectiveness in monitoring and detection tasks in nuclear power plants are developed based on the cost-benefit principle and are validated in experimental studies. The underlying principle of the measures is that information sources should be selectively attended according to their importance. One of the two measures is the fixation to importance ratio (FIR), which represents the attentional resource (eye fixations) spent on an information source compared with the importance of that source. The other measure is selective attention effectiveness (SAE), which incorporates the FIRs of all information sources. The FIR represents the specific effectiveness of an information source, whereas the SAE represents the overall effectiveness of all information sources. The frequency and duration of an operator's eye fixations on information sources are used as the measure of attentional resource. The analytic hierarchy process was used to evaluate the importance of information sources. Experiments were conducted to validate the proposed measures. From the results of the experiments, the FIR and the SAE are concluded to be promising measures of effectiveness in monitoring and detection during complex diagnostic tasks.

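The FIR reads directly as attention share divided by importance weight. The sketch below computes it from invented fixation data; the aggregate at the end is only a crude proxy, since the abstract does not give the paper's exact SAE formula.

```python
import numpy as np

importance = np.array([0.5, 0.3, 0.2])    # AHP-derived importance weights
fixation = np.array([120.0, 40.0, 40.0])  # fixation duration (s) per source

# FIR = attention share / importance; 1 means attention matches importance,
# >1 over-attended, <1 under-attended.
fir = (fixation / fixation.sum()) / importance

# Crude overall-effectiveness proxy (NOT the paper's SAE definition):
# closer to 1, the better the match across all sources.
sae_proxy = 1.0 / (1.0 + np.abs(np.log(fir)).mean())
print(fir.round(2), round(sae_proxy, 3))
```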
  • Knowledge-Discounted Event Detection in Sports Video

    Publication Year: 2010 , Page(s): 1009 - 1024
    Cited by:  Papers (11)
    PDF (1418 KB) | HTML

    Automatic event annotation is an essential requirement for constructing an effective sports video summary. Researchers worldwide have actively been seeking the most robust and powerful solutions to detect and classify key events (or highlights) in different sports. Most of the current and widely used approaches employ rules that model the typical pattern of audiovisual features within particular sport events. These rules are mainly based on manual observation and heuristic knowledge; therefore, machine learning can be used as an alternative. To bridge the gap between the two alternatives, we propose a hybrid approach, which integrates statistics into logical rule-based models during highlight detection. We have also pioneered the use of play-break segments as a universal scope of detection and a standard set of features that can be applied for different sports, including soccer, basketball, and Australian football. The proposed method uses a limited amount of domain knowledge, making it less subjective and more robust across different sports. An experiment using a large data set of sports video has demonstrated the effectiveness and robustness of the algorithms.

  • Automatic Modeling for Performance Evaluation of Inventory and Outbound Distribution

    Publication Year: 2010 , Page(s): 1025 - 1044
    Cited by:  Papers (6)
    PDF (1568 KB) | HTML

    This paper proposes a methodology for modeling and evaluating supply chains based on generalized stochastic Petri net (GSPN) components. The modeling process follows a bottom-up approach, which assures desirable model properties, starting from a set of predefined modules for typical supply chain entities. A compositional algebra that formally defines GSPN compositions is presented. The methodology is supported by an implementation in the stochastic logistics optimizer tool (SLOT). An industrial case study has been conducted to show the benefits of the approach.

  • Extending the Adaptability of Reference Models

    Publication Year: 2010 , Page(s): 1045 - 1056
    Cited by:  Papers (8)
    PDF (1227 KB) | HTML

    Reference models are an important aid for business process modeling and design. Their aim is to capture domain knowledge and assist in the design of enterprise-specific business processes. The application of reference models for process design requires guidance in reusing these models and flexibility in adapting them to specific enterprises. One popular modeling language for specifying reference models is the event-driven process chain (EPC), which has been extended to express configurable reference models, i.e., configurable EPC (C-EPC). These models provide explicit reuse guidance but allow a limited level of flexibility following a reuse-by-configuration approach. To increase the level of adaptability of reference models, in this paper, we propose to utilize the application-based domain modeling (ADOM) approach to specify and apply reference models by using EPC. ADOM supports the enforcement of reference model constraints while allowing high levels of flexibility, adaptability, and variability in the business processes of particular enterprises. This paper presents the syntax and semantics of the proposed approach, called ADOM-EPC, and its specialization and configuration capabilities. ADOM-EPC is evaluated by comparing it to C-EPC, a leading approach for reference modeling and reuse, in terms of expressiveness and comprehensibility. Although the expressiveness of ADOM-EPC, i.e., its set of specified reuse operations, exceeds that of C-EPC, the understandability of the two types of reference models is similar.

  • Game-Theoretic Validation and Analysis of Air Combat Simulation Models

    Publication Year: 2010 , Page(s): 1057 - 1070
    Cited by:  Papers (9)
    PDF (549 KB) | HTML

    This paper presents a new game-theoretic approach toward the validation of discrete-event air combat (AC) simulation models and simulation-based optimization. In this approach, statistical techniques are applied for estimating games based on data produced by a simulation model. The estimation procedure is presented in cases involving games with both discrete and continuous decision variables. The validity of the simulation model is assessed by comparing the properties of the estimated games to actual practices in AC. These games are also applied for simulation-based optimization in a two-sided setting in which the action of the opponent is taken into account. In optimization, the estimated games enable the study of the effectiveness of AC tactics as well as aircraft, weapons, and avionics configurations. The game-theoretic approach enhances existing methods for the validation of discrete-event simulation models and techniques for simulation-based optimization by incorporating the inherent game setting of AC into the analysis. It also provides a novel game-theoretic perspective to simulation metamodeling, which is used to facilitate simulation analysis. The utilization of the game-theoretic approach is illustrated by analyzing simulation data obtained with an existing AC simulation model.

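For games with discrete decision variables, the estimation step the abstract describes amounts to averaging replicated simulation outcomes into a payoff matrix. A toy zero-sum sketch with invented tactics and numbers, including a check for a pure-strategy saddle point:

```python
import numpy as np

rng = np.random.default_rng(7)
tactics = ["aggressive", "defensive", "evasive"]
true_payoff = np.array([[0.2, -0.1, 0.3],      # invented ground truth:
                        [0.0,  0.1, -0.2],     # blue's expected outcome for
                        [-0.3, 0.2, 0.0]])     # each (blue, red) tactic pair

# 200 noisy simulation replications per cell, averaged into the estimate.
payoff = rng.normal(true_payoff, 0.5, size=(200, 3, 3)).mean(axis=0)

blue_br = payoff.argmax(axis=0)   # blue's best reply to each red tactic
red_br = payoff.argmin(axis=1)    # red's best reply to each blue tactic
saddle = [(tactics[i], tactics[j])
          for j, i in enumerate(blue_br) if red_br[i] == j]
print(payoff.round(2), saddle)
```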
  • Integrating Ontologies Based on P2P Mappings

    Publication Year: 2010 , Page(s): 1071 - 1082
    Cited by:  Papers (5)
    PDF (750 KB) | HTML

    Large organizations usually have difficulties in dealing with the exponential growth of information. Therefore, there is a high demand for innovative solutions that can cope with such growth and integrate such information. This paper proposes a new approach, called Emergent Ontologies (EOs), toward the generation of a single organizational ontology through which it becomes possible to browse all information of an organization. This proposal considers that an organization's information is typically distributed across peers, and that in each peer this information may be represented through a different ontology. As the peers of an organization need to exchange information, peer-to-peer mappings are created to bridge these ontologies. Based on these mappings, this paper proposes a set of heuristics, which are used to generate the EO. These heuristics have been incorporated into the OntoEmerge system, a prototype developed to facilitate the creation of an initial organizational ontology. To evaluate the system and the heuristics behind it, some experiments have been performed; a quantitative and qualitative analysis of these experiments is also presented. The approach yields encouraging results, which can be considered a starting point for the creation of organizational ontologies.

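One plausible heuristic of the kind the abstract mentions: concepts linked directly or transitively by peer-to-peer mappings collapse into a single concept of the emergent ontology. Union-find computes the transitive closure; the peer prefixes and concept names are invented, and this is not claimed to be one of OntoEmerge's actual heuristics.

```python
parent = {}

def find(x):
    """Union-find lookup with path halving."""
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

# P2P mappings created when peers exchange information.
mappings = [("hr:Employee", "fin:Staff"), ("fin:Staff", "it:User")]
for a, b in mappings:
    union(a, b)

# Group concepts by representative: each group becomes one EO concept.
groups = {}
for concept in list(parent):
    groups.setdefault(find(concept), set()).add(concept)
print(list(groups.values()))  # one merged concept spanning all three peers
```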
  • An Efficient P2P Content Distribution System Based on Altruistic Demand and Recoding Dissemination

    Publication Year: 2010 , Page(s): 1083 - 1093
    Cited by:  Papers (1)
    PDF (842 KB) | HTML

    Peer-to-peer (P2P) content distribution systems based on random linear combination coding schemes outperform traditional block forwarding and source coding systems but have large storage requirements and high computation and communication overheads. To resolve this problem, this paper presents an efficient and scalable P2P content dissemination system based on novel altruistic demand and recoding dissemination mechanisms. In the proposed approach, the shared-content file is segmented and encoded using Reed-Solomon code at a seed. Downstream peers wishing to obtain the file utilize an altruistic demand mechanism to issue demand requests for coded blocks which are useful not only to themselves but also to their neighbors. Upon receiving these requests, the upstream peers utilize a recoding dissemination mechanism to provide the downstream peers with either an existing useful coded block or a new coded block produced using a Lagrange polynomial interpolation method. The two mechanisms rapidly increase the diversity of the coded blocks within the network and therefore provide an effective solution to the missing last block problem. In addition, it is shown that the proposed content distribution system demonstrates a substantial improvement over P2P systems based on random linear combination coding in terms of a lower storage requirement, reduced computation and communication costs, and an improved download efficiency.

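A toy rendering of the altruistic demand rule described above: request a coded block that you lack and that the most neighbors also lack, so that what you later relay is widely useful. Block IDs and holdings are invented; the real system operates on Reed-Solomon-coded blocks, not plain integers.

```python
def pick_request(available, mine, neighbors):
    """Among blocks the upstream peer can serve that we still need, prefer
    the one missing from the most neighbors' collections."""
    candidates = available - mine
    return max(candidates, key=lambda b: sum(b not in n for n in neighbors))

available = {1, 2, 3, 4}        # blocks held by the upstream peer
mine = {1}
neighbors = [{1, 2}, {1, 2, 3}, {2}]
print(pick_request(available, mine, neighbors))  # -> 4, absent everywhere
```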
  • Uncertainty Evaluation Through Mapping Identification in Intensive Dynamic Simulations

    Publication Year: 2010 , Page(s): 1094 - 1104
    Cited by:  Papers (7)
    PDF (532 KB) | HTML

    We study how the dependence of a simulation output on an uncertain parameter can be determined when simulations are computationally expensive and so can only be run for very few parameter values. Specifically, the methodology that is developed, known as the probabilistic collocation method (PCM), permits selection of these few parameter values, so that the mapping between the parameter and the output can be approximated well over the likely parameter values, using a low-order polynomial. Several new analyses are developed concerning the ability of PCM to predict the mapping structure, as well as output statistics. A holistic methodology is also developed for the typical case where the uncertain parameter's probability distribution is unknown, and instead, only depictive moments or sample data (which possibly depend on known regressors) are available. Finally, the application of PCM to weather-uncertainty evaluation in air traffic flow management is discussed.

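A minimal sketch of the collocation idea for a Gaussian uncertain parameter: run the expensive simulator only at Gauss-Hermite points, fit a low-order polynomial surrogate there, and read output statistics off the quadrature rule. The simulator stand-in and parameter values are invented.

```python
import numpy as np

def expensive_simulation(theta):
    # Placeholder for a costly dynamic simulation run at parameter theta.
    return np.sin(theta) + 0.1 * theta**2

mu, sigma = 0.0, 1.0    # uncertain parameter ~ N(mu, sigma^2)
order = 3               # polynomial order of the surrogate

# Collocation points: Gauss-Hermite nodes of the probabilists' weight, so
# order+1 runs suffice to interpolate an order-degree polynomial.
nodes, weights = np.polynomial.hermite_e.hermegauss(order + 1)
thetas = mu + sigma * nodes
outputs = np.array([expensive_simulation(t) for t in thetas])

surrogate = np.polynomial.Polynomial.fit(thetas, outputs, order)

# The same quadrature rule gives output statistics almost for free.
mean_est = np.sum(weights * outputs) / np.sum(weights)
print(round(float(surrogate(0.5)), 4), round(float(mean_est), 4))
```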
  • Using the Analytic Hierarchy Process to Examine Judgment Consistency in a Complex Multiattribute Task

    Publication Year: 2010 , Page(s): 1105 - 1115
    Cited by:  Papers (7)
    PDF (899 KB) | HTML

    This paper investigates the impact of framing and time pressure on human judgment performance in a complex multiattribute judgment task. We focus on the decision process of human participants who must choose between pairwise alternatives in a resource-allocation task. We used the Analytic Hierarchy Process (AHP) to calculate the relative weights of the four alternatives (i.e., C1, C2, C3, and C4) and the judgment consistency. Using the AHP, we examined two sets of hypotheses that address the impact of task conditions on the weight prioritization of choice alternatives and the internal consistency of the judgment behavior under varying task conditions. The experiment simulated the allocation of robotic assets across the battlefield to collect data about an enemy. Participants had to make a judgment about which asset to allocate to a new area by taking into account three criteria related to the likelihood of success. We manipulated the information frame and the nature of the task. We found that, in general, participants gave significantly different weights to the same alternatives under different frames and task conditions. Specifically, in terms of ln-transformed priority weights, participants gave significantly lower weights to C2 and C4 and higher weight to C3 under the gain frame than under the loss frame, and also, under different task conditions (i.e., Tasks #1, #2, and #3), participants gave significantly higher weight to C4 in Task #1, lower weights to C1 and C4 and higher weight to C3 in Task #2, and lower weight to C3 in Task #3. Furthermore, we found that the internal consistency of the decision behavior was worse, first, in the loss frame than the gain frame and, second, under time pressure. Our methodology complements utility-theoretic frameworks by assessing judgment consistency without requiring the use of task-performance outcomes. This work is a step toward establishing a coherence criterion to investigate judgment under naturalistic conditions. The results will be useful for the design of multiattribute interfaces and decision aiding tools for real-time judgments in time-pressured task environments.

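The AHP machinery the study relies on, priority weights from the principal eigenvector of a pairwise-comparison matrix plus Saaty's consistency ratio, is compact enough to sketch. The 4 x 4 matrix below is illustrative, not the study's data.

```python
import numpy as np

# Reciprocal pairwise-comparison matrix over four alternatives.
A = np.array([[1,   3,   5,   1],
              [1/3, 1,   3,   1/3],
              [1/5, 1/3, 1,   1/5],
              [1,   3,   5,   1]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                    # normalized priority weights

n = A.shape[0]
CI = (eigvals[k].real - n) / (n - 1)        # consistency index
RI = 0.90                                   # Saaty's random index for n = 4
CR = CI / RI                                # CR < 0.10 is conventionally consistent
print(weights.round(3), round(CR, 4))
```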
  • Optimal Pricing and Stocking Decisions for Newsvendor Problem With Value-at-Risk Consideration

    Publication Year: 2010 , Page(s): 1116 - 1119
    Cited by:  Papers (11)
    PDF (149 KB) | HTML

    Motivated by the popularity of the VaR measure in financial applications, we study the classical newsvendor problem with Value-at-Risk (VaR) consideration and price-dependent demands. We first investigate the problem's structural properties and derive analytically the optimal joint stocking and pricing decisions. We then explore the difference between the optimal decisions under the VaR formulation and under the classical expected-profit-maximization model. Finally, we reveal an interesting analytical relationship between the inventory service level and the VaR measure. Insights are generated.

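Two ingredients the abstract combines, the classical critical-fractile stocking rule and the profit quantile that a VaR constraint bounds, sketched on invented normal-demand numbers. The paper derives the joint price/stock optimum analytically; nothing here reproduces that result.

```python
import numpy as np
from scipy.stats import norm

price, cost = 10.0, 6.0    # unit selling price and cost (illustrative)
mu, sigma = 100.0, 20.0    # demand ~ N(mu, sigma^2), also illustrative

# Expected-profit-maximizing stock (no salvage value): q* = F^{-1}((p-c)/p).
q_star = norm.ppf((price - cost) / price, loc=mu, scale=sigma)

# Monte Carlo profit distribution at q*; the 5% quantile is the level a
# Value-at-Risk constraint would bound from below.
rng = np.random.default_rng(0)
demand = rng.normal(mu, sigma, 100_000)
profit = price * np.minimum(demand, q_star) - cost * q_star
print(f"q* = {q_star:.1f}, 5% profit quantile = {np.quantile(profit, 0.05):.1f}")
```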
  • Fuzzy Multiple Criteria Hierarchical Group Decision-Making Based on Interval Type-2 Fuzzy Sets

    Publication Year: 2010 , Page(s): 1120 - 1128
    Cited by:  Papers (14)
    PDF (975 KB) | HTML

    In this paper, we present a new method for handling fuzzy multiple criteria hierarchical group decision-making problems based on arithmetic operations and fuzzy preference relations of interval type-2 fuzzy sets. Because the time complexity of the proposed method is O(nk), where n is the number of criteria and k is the number of decision-makers, it is more efficient than Wu and Mendel's method, whose time complexity is O(mnk), where m is the number of α-cuts. Moreover, the proposed method overcomes another drawback of Wu and Mendel's method: it can handle evaluating values represented by nonnormal interval type-2 fuzzy sets. The proposed method provides a useful way to handle fuzzy multiple criteria hierarchical group decision-making problems.

  • IEEE Systems, Man, and Cybernetics Society Information

    Publication Year: 2010 , Page(s): C3
    PDF (29 KB)
    Freely Available from IEEE
  • IEEE Transactions on Systems, Man, and Cybernetics—Part A: Systems and Humans Information for authors

    Publication Year: 2010 , Page(s): C4
    PDF (35 KB)
    Freely Available from IEEE

Aims & Scope

The fields of systems engineering and human-machine systems: systems engineering includes efforts that involve issue formulation, issue analysis and modeling, and decision making and issue interpretation at any of the lifecycle phases associated with the definition, development, and implementation of large systems.

 

This Transactions ceased production in 2012. The current retitled publication is IEEE Transactions on Systems, Man, and Cybernetics: Systems.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief
Dr. Witold Pedrycz
University of Alberta