IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans

Issue 6 • Nov. 2008

  • Table of contents

    Publication Year: 2008 , Page(s): C1 - 1197
    Freely Available from IEEE
  • IEEE Transactions on Systems, Man, and Cybernetics—Part A: Systems and Humans publication information

    Publication Year: 2008 , Page(s): C2
    Freely Available from IEEE
  • An Expedient Wireless Sensor Automaton With System Scalability and Efficiency Benefits

    Publication Year: 2008 , Page(s): 1198 - 1209
    Cited by:  Papers (10)

    Wireless sensor networks are characterized by energy-constrained nodes that are tasked with collecting and forwarding environmental parameters with a requisite measurement fidelity, both spatial and temporal. At the system level, fidelity is not the only issue of interest; a low-cost solution and a long life for the deployed network must also be achieved. As such, sensor nodes should be low in complexity and should achieve the requisite fidelity requirements with minimum communication and coordination. This paper proposes that these nodes can operate as automata and still achieve the overall system performance requirements with minimal control. This paper presents and analyzes an automaton architecture and a control strategy designed to maintain spatial fidelity as the performance objective. In particular, we show the following: 1) that the architecture permits control of the number of nodes actively transmitting information in each epoch (denoted by Q); 2) that the variance of Q can be controlled and, particularly, can be set to a value significantly less than that of a Bernoulli-process benchmark (i.e., the architecture is expedient with respect to the control of this variance); 3) that the control strategy is scalable over several orders of magnitude; and 4) that the methodology is efficient in approaching benchmark performance with respect to energy usage. The proposed methodology has the following specific advantages over the benchmark: 1) the total number of sensors deployed in the network need not be known, and 2) the strategy maintains robust control of Q over changes in the commanded value and changes in the number of deployed sensors.
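    The Bernoulli-process benchmark mentioned in the abstract can be made concrete: if each of N deployed sensors transmits independently with probability p in an epoch, the number of active nodes Q is binomial, so E[Q] = Np and Var(Q) = Np(1 - p). The minimal sketch below illustrates that benchmark only (the authors' automaton architecture is not reproduced); N, p, and the epoch count are arbitrary assumptions.

        import numpy as np

        # Bernoulli-process benchmark: each of N sensors transmits
        # independently with probability p in every epoch.
        N, p, epochs = 1000, 0.05, 20000      # assumed values, for illustration only
        rng = np.random.default_rng(0)

        Q = rng.binomial(N, p, size=epochs)   # active transmitters per epoch

        print("target mean E[Q]  =", N * p)
        print("benchmark Var(Q)  =", N * p * (1 - p))   # analytic binomial variance
        print("simulated Var(Q)  =", round(Q.var(), 2))
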
  • Predicting Interactions Between Agents in Agent-Based Modeling and Simulation of Sociotechnical Systems

    Publication Year: 2008 , Page(s): 1210 - 1220
    Cited by:  Papers (10)

    Agent-based modeling and simulation is a valuable research tool for the analysis of dynamic and emergent phenomena of large-scale complex sociotechnical systems. The dynamic behavior of such systems includes both the individual behavior of heterogeneous agents within the system and the emergent behavior arising from interactions between agents; both must be accurately modeled and efficiently executed in simulations. This paper provides a timing and prediction mechanism for the accurate modeling of interactions among agents, correspondingly increasing the computational efficiency of agent-based simulations. A method for assessing the accuracy of interaction prediction methods is described based on signal detection theory. An intelligent interaction timing agent framework that uses a neural network to predict the timing of interactions between heterogeneous agents is presented; this framework dramatically improves the accuracy of interaction timing without requiring detailed scenario-specific modeling efforts for each simulation configuration.
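    Signal detection theory, which the abstract names as the basis for assessing prediction accuracy, scores a binary predictor by separating its hit rate from its false-alarm rate. The sketch below shows that generic scoring (sensitivity d' = z(hit rate) - z(false-alarm rate)); the predicted/actual arrays are invented stand-ins, not data or code from the paper.

        import numpy as np
        from scipy.stats import norm

        # Hypothetical outcomes: did an interaction occur, and was one predicted?
        actual    = np.array([1, 1, 0, 0, 1, 0, 1, 0, 0, 1], dtype=bool)
        predicted = np.array([1, 0, 0, 1, 1, 0, 1, 0, 0, 1], dtype=bool)

        hits         = np.sum(predicted & actual)
        misses       = np.sum(~predicted & actual)
        false_alarms = np.sum(predicted & ~actual)
        correct_rej  = np.sum(~predicted & ~actual)

        hit_rate = hits / (hits + misses)
        fa_rate  = false_alarms / (false_alarms + correct_rej)

        d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)   # sensitivity index
        print(f"hit rate={hit_rate:.2f}, false-alarm rate={fa_rate:.2f}, d'={d_prime:.2f}")
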
  • Ecological Interface Design of a Tactical Airborne Separation Assistance Tool

    Publication Year: 2008 , Page(s): 1221 - 1233
    Cited by:  Papers (21)

    In a free-flight airspace environment, pilots have more freedom to choose user-preferred trajectories. An onboard pilot support system is needed that exploits travel freedom while maintaining spatial separation from other traffic. Ecological interface design is used to design an interface tool that assists pilots with the tactical planning of efficient conflict-free trajectories toward their destination. Desired pilot actions emerge from the visualization of workspace affordances in terms of a suitable description of aircraft (loco)motion. Traditional models and descriptions for aircraft motion cannot be applied efficiently for this purpose. Through functional modeling, more suitable locomotion models for trajectory planning are analyzed. As a result, a novel interface, the state vector envelope, is presented that is intended to provide the pilot with both low-level information, allowing direct action, and high-level information, allowing conflict understanding and situation awareness.
  • A New Model for Team Optimization: The Effects of Uncertainty on Interaction

    Publication Year: 2008 , Page(s): 1234 - 1247
    Cited by:  Papers (3)

    The objective of this paper is twofold. First, a new model of team optimization is formulated. Second, this model is used to investigate the effects of uncertainty on interaction. A model of team optimization that encompasses the classical team decision problem is introduced. This model is suitable for problems where agents' posterior information is not shared and is possibly inconsistent with the mutual prior information. For a broad class of problems, every agent's dominant beliefs about the posterior information of the other agents are derived. Then, the level of interaction and the level of uncertainty are defined, and the relationship between these two levels is studied. It is shown that the optimal level of interaction decreases as the level of uncertainty increases, and in some cases, the optimal level of interaction tends to zero, suggesting that the optimization problem may be decomposed. The theoretical results are demonstrated on sensor network examples.
  • A Human–Computer Interface Using Symmetry Between Eyes to Detect Gaze Direction

    Publication Year: 2008 , Page(s): 1248 - 1261
    Cited by:  Papers (15)

    In the cases of paralysis so severe that a person's ability to control movement is limited to the muscles around the eyes, eye movements or blinks are the only way for the person to communicate. Interfaces that assist in such communication are often intrusive, require special hardware, or rely on active infrared illumination. A nonintrusive communication interface system called EyeKeys was therefore developed, which runs on a consumer-grade computer with video input from an inexpensive Universal Serial Bus camera and works without special lighting. The system detects and tracks the person's face using multiscale template correlation. The symmetry between left and right eyes is exploited to detect if the person is looking at the camera or to the left or right side. The detected eye direction can then be used to control applications such as spelling programs or games. The game "BlockEscape" was developed to evaluate the performance of EyeKeys and compare it to a mouse substitution interface. Experiments with EyeKeys have shown that it is an easily used computer input and control device for able-bodied people and has the potential to become a practical tool for people with severe paralysis.
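    The symmetry test at the core of the abstract can be pictured with a toy sketch: mirror the right-eye patch and compare it with the left-eye patch; high similarity suggests the eyes are directed at the camera, while the sign of a left/right half-difference hints at gaze direction. This is only a simplified rendering of the idea on random stand-in patches, not the EyeKeys implementation.

        import numpy as np

        def gaze_from_symmetry(left_eye, right_eye, sym_threshold=0.8):
            """Toy gaze classifier: compare the left-eye patch with the
            horizontally mirrored right-eye patch (both grayscale, same shape)."""
            mirrored = np.fliplr(right_eye)
            # Normalized correlation between the two patches.
            l = (left_eye - left_eye.mean()) / (left_eye.std() + 1e-9)
            m = (mirrored - mirrored.mean()) / (mirrored.std() + 1e-9)
            corr = float((l * m).mean())
            if corr >= sym_threshold:
                return "center"            # eyes roughly symmetric: looking at camera
            # Asymmetry heuristic: compare the two halves of the difference image.
            diff = l - m
            half = diff.shape[1] // 2
            return "left" if diff[:, :half].sum() > diff[:, half:].sum() else "right"

        # Hypothetical 16x24 grayscale patches (random stand-ins).
        rng = np.random.default_rng(1)
        print(gaze_from_symmetry(rng.random((16, 24)), rng.random((16, 24))))
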
  • Providing Justifications in Recommender Systems

    Publication Year: 2008 , Page(s): 1262 - 1272
    Cited by:  Papers (19)

    Recommender systems are gaining widespread acceptance in e-commerce applications to confront the "information overload" problem. Providing justification to a recommendation gives credibility to a recommender system. Some recommender systems (Amazon.com, etc.) try to explain their recommendations, in an effort to regain customer acceptance and trust. However, their explanations are not sufficient, because they are based solely on rating or navigational data, ignoring the content data. Several systems have proposed the combination of content data with rating data to provide more accurate recommendations, but they cannot provide qualitative justifications. In this paper, we propose a novel approach that attains both accurate and justifiable recommendations. We construct a feature profile for the users to reveal their favorite features. Moreover, we group users into biclusters (i.e., groups of users which exhibit highly correlated ratings on groups of items) to exploit partial matching between the preferences of the target user and each group of users. We have evaluated the quality of our justifications with an objective metric in two real data sets (Reuters and MovieLens), showing the superiority of the proposed method over existing approaches.
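    The "partial matching between the preferences of the target user and each group of users" can be pictured with a tiny hypothetical sketch: score each bicluster by how much its feature set overlaps the target user's favorite features and recommend from the best-matching group. The feature sets below are invented; the paper's profile construction and bicluster mining are not reproduced here.

        # Hypothetical favorite-feature profiles (e.g., movie genres).
        user_features = {"thriller", "sci-fi", "noir"}

        biclusters = {
            "bicluster_A": {"thriller", "sci-fi", "action"},
            "bicluster_B": {"romance", "comedy"},
            "bicluster_C": {"noir", "thriller", "drama"},
        }

        def jaccard(a, b):
            return len(a & b) / len(a | b)

        # Rank groups of users by partial match with the target user's profile.
        ranked = sorted(biclusters, key=lambda k: jaccard(user_features, biclusters[k]),
                        reverse=True)
        for name in ranked:
            print(name, round(jaccard(user_features, biclusters[name]), 2))
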
  • A Low-Complexity Parabolic Lip Contour Model With Speaker Normalization for High-Level Feature Extraction in Noise-Robust Audiovisual Speech Recognition

    Publication Year: 2008 , Page(s): 1273 - 1280
    Cited by:  Papers (3)

    This paper proposes a novel low-complexity lip contour model for high-level optic feature extraction in noise-robust audiovisual (AV) automatic speech recognition systems. The model is based on weighted least-squares parabolic fitting of the upper and lower lip contours, does not require the assumption of symmetry across the horizontal axis of the mouth, and is therefore realistic. The proposed model does not depend on the accurate estimation of specific facial points, as do other high-level models. Also, we present a novel low-complexity algorithm for speaker normalization of the optic information stream, which is compatible with the proposed model and does not require parameter training. The use of the proposed model with speaker normalization results in noise robustness improvement in AV isolated-word recognition relative to using the baseline high-level model.
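    Weighted least-squares parabolic fitting of a lip contour, as named in the abstract, reduces to fitting y = a*x^2 + b*x + c with per-point weights. The sketch below does this for made-up upper- and lower-lip edge points using numpy; the weighting scheme and the point extraction are assumptions, not the authors' procedure.

        import numpy as np

        def fit_parabola(x, y, w):
            """Weighted least-squares fit of y = a*x**2 + b*x + c; returns (a, b, c)."""
            return np.polyfit(x, y, deg=2, w=w)

        # Hypothetical lip-edge points in mouth-centered coordinates.
        x_upper = np.array([-20, -10,   0, 10, 20], dtype=float)
        y_upper = np.array([  2,   7,   9,  6,  1], dtype=float)
        x_lower = np.array([-20, -10,   0, 10, 20], dtype=float)
        y_lower = np.array([ -1,  -8, -11, -7, -2], dtype=float)

        # Assumed weights, e.g. edge-detector confidence per point.
        w = np.array([0.5, 1.0, 1.0, 1.0, 0.5])

        upper = fit_parabola(x_upper, y_upper, w)   # upper contour need not mirror
        lower = fit_parabola(x_lower, y_lower, w)   # the lower one (no symmetry assumed)
        print("upper a,b,c:", np.round(upper, 3))
        print("lower a,b,c:", np.round(lower, 3))
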
  • Game and Information Theory Analysis of Electronic Countermeasures in Pursuit-Evasion Games

    Publication Year: 2008 , Page(s): 1281 - 1294
    Cited by:  Papers (7)

    Two-player pursuit-evasion games in the literature typically either assume both players have perfect knowledge of the opponent's positions or use primitive sensing models. This unrealistically skews the problem in favor of the pursuer, who needs only to maintain a faster velocity at all turning radii. In real life, an evader usually escapes when the pursuer no longer knows the evader's position. In our previous work, we modeled pursuit-evasion without perfect information as a two-player bimatrix game by using a realistic sensor model and information theory to compute game-theoretic payoff matrices. That game has a saddle point when the evader uses strategies that exploit sensor limitations, whereas the pursuer relies on strategies that ignore the sensing limitations. In this paper, we consider, for the first time, the effect of many types of electronic countermeasures (ECM) on pursuit-evasion games. The evader's decision to initiate its ECM is modeled as a function of the distance between the players. Simulations show how to find optimal strategies for ECM use when initial conditions are known. We also discuss the effectiveness of different ECM technologies in pursuit-evasion games.
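    A pure-strategy saddle point of the kind the abstract mentions can be checked directly on a payoff matrix: it exists when the best guaranteed row payoff (maximin) equals the best guaranteed column payoff (minimax). The matrix below is an arbitrary toy zero-sum example, a special case of the bimatrix setting in the paper and not one of its ECM payoff matrices.

        import numpy as np

        # Toy zero-sum payoff matrix: rows = evader strategies, columns = pursuer
        # strategies, entries = payoff to the row (evader) player.
        A = np.array([[3, 1, 4],
                      [2, 2, 5],
                      [0, 1, 6]])

        maximin = A.min(axis=1).max()   # best guaranteed payoff for the row player
        minimax = A.max(axis=0).min()   # best guaranteed payoff for the column player
        print("maximin =", maximin, " minimax =", minimax)

        # A cell is a pure-strategy saddle point if it is the minimum of its row
        # and the maximum of its column.
        saddles = [(r, c) for r in range(A.shape[0]) for c in range(A.shape[1])
                   if A[r, c] == A[r, :].min() == A[:, c].max()]
        print("saddle points:", saddles)   # -> [(1, 1)] for this example
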
  • An Effective Model and Management Scheme of Personal Space for Ubiquitous Computing Applications

    Publication Year: 2008 , Page(s): 1295 - 1311
    Cited by:  Papers (4)

    In ubiquitous computing, the computing environment for a user is no longer a fixed computer, but a space that includes multiple heterogeneous devices that can change dynamically according to the user's situation. Managing the space is an essential part of ubiquitous computing because application services in this environment need to be adaptive to the users' current situation. However, previous approaches oversimplified the model of personal space and demonstrated some limitations in developing user-centric adaptive services. In this paper, we propose an effective personal space model, defined as virtual personal world (VPW), and a sophisticated method to manage personal spaces. The VPW represents a personal space by using a set of stateful elements and their relationships, which are denoted as virtual objects, services, and neighbors. The VPW provides expressive and accurate information for a particular user, thereby helping application services adapt their operations for the user dynamically. Our conceptual model is designed as personal operating middleware software that manages the user's VPW and provides application services. Experimental results show that the prototype system based on VPW has reasonable performance in running application services and managing personal spaces. We also found that the VPW model can increase the average user satisfaction rate by up to 40% compared to other models in our simulation environment.
  • Toward Behavioral Web Services Using Policies

    Publication Year: 2008 , Page(s): 1312 - 1324
    Cited by:  Papers (5)

    Making Web services context-aware is a challenge. This amounts to making a Web service expose appropriate behaviors in response to changes detected in the environment. Context awareness requires a review and extension of the current execution model of Web services. This paper discusses the seamless combination of context and policy to manage behaviors that Web services expose during composition and in response to changes in the environment. For this purpose, a four-layer approach is devised. These layers are denoted by policy, user, Web service, and resource. In this approach, behavior management and binding are subject to executing policies of types permission, obligation, restriction, and dispensation. A prototype that illustrates how context and policy are woven into Web services composition scenarios is presented as well.
  • Firing Sequences Estimation in Vector Space Over Z_{3} for Ordinary Petri Nets

    Publication Year: 2008 , Page(s): 1325 - 1336
    Cited by:  Papers (4)

    Event sequence estimation is an important issue in the fault diagnosis of discrete event systems, since fault events cannot be measured directly. This paper addresses event sequence estimation with Petri net models. Events are assumed to be represented by transitions, and firing sequences are estimated from measurements of the marking variation. Estimation with and without measurement errors is discussed in an n-dimensional vector space over the alphabet Z_3 = {-1, 0, 1}. Sufficient conditions and estimation algorithms are provided. Performance is evaluated, and the efficiency of the approach is illustrated on two examples from manufacturing engineering.
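    The estimation described in the abstract starts from the standard Petri net state equation, m' = m + C*sigma, where C is the incidence matrix and sigma counts transition firings; an observed marking variation Delta_m = m' - m therefore constrains which firings could have occurred. The sketch below only illustrates this relation on a tiny hypothetical net; it is not the paper's estimation algorithm over Z_3.

        import numpy as np
        from itertools import product

        # Hypothetical ordinary Petri net: 3 places x 2 transitions.
        # C[p, t] = +1 if transition t adds a token to place p, -1 if it removes one.
        C = np.array([[-1,  0],
                      [ 1, -1],
                      [ 0,  1]])

        observed_dm = np.array([-1, 0, 1])   # measured marking variation

        # Enumerate small firing-count vectors sigma and keep those explaining dm.
        candidates = [np.array(s) for s in product(range(3), repeat=C.shape[1])
                      if np.array_equal(C @ np.array(s), observed_dm)]
        print("firing-count vectors consistent with the observation:")
        for sigma in candidates:
            print(" ", sigma)                # e.g. [1 1]: t1 fired once, t2 fired once
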
  • Selective Siphon Control for Deadlock Prevention in Petri Nets

    Publication Year: 2008 , Page(s): 1337 - 1348
    Cited by:  Papers (60)

    Deadlock prevention is a crucial step in the modeling of flexible manufacturing systems. In the Petri net framework, deadlock prevention policies based on siphon control are often employed, since it is easy to specify generalized mutual exclusion constraints that avoid the emptying of siphons. However, such policies may require an excessive computational load and result in impractical oversized control subnets. This is often a consequence of the redundancy in the control conditions derived from siphons. In this paper, a novel method is proposed that provides small-sized controllers, based on a set covering approach that conveniently relates siphons and markings. Some examples are provided to demonstrate the feasibility of the approach and to compare it with other methods proposed in the literature.
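    The set covering idea in the abstract, picking a small subset of siphons whose control conditions already cover all problematic markings, can be illustrated with the textbook greedy set-cover heuristic. The siphon/marking sets below are invented placeholders; the paper's actual relation between siphons and markings is not reproduced.

        # Markings that must be covered by at least one controlled siphon (hypothetical IDs).
        markings = {1, 2, 3, 4, 5, 6}

        # Hypothetical map: siphon -> problematic markings its control condition handles.
        covers = {
            "S1": {1, 2, 3},
            "S2": {3, 4},
            "S3": {4, 5, 6},
            "S4": {1, 6},
        }

        selected, uncovered = [], set(markings)
        while uncovered:
            # Greedy choice: the siphon covering the most still-uncovered markings.
            best = max(covers, key=lambda s: len(covers[s] & uncovered))
            if not covers[best] & uncovered:
                raise RuntimeError("remaining markings cannot be covered")
            selected.append(best)
            uncovered -= covers[best]

        print("siphons to control:", selected)   # e.g. ['S1', 'S3']
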
  • Logical System Representation of Images and Removal of Impulse Noise

    Publication Year: 2008 , Page(s): 1349 - 1362
    Cited by:  Papers (6)

    This paper presents a new concept of removing impulse noise through primary implicant elimination (PIE) applied to a logical system representation of the data. The approach is applicable to binary and grayscale images; errors are corrected efficiently, in terms of the number of computations and memory requirements, while the fine details of the image are mostly preserved. Three filtering algorithms are presented: a general form in addition to iterative and switching variations. Experimental results on salt-and-pepper impulse noise, as well as on random-valued impulse noise, are compared against the performance of traditional median-based filters (both regular and switching) and are shown to be most successful in the often difficult case in which the original image contains many detailed patterns.
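    Among the baselines the abstract compares against, the switching median filter is easy to sketch: replace a pixel with the median of its neighborhood only when it deviates from that median by more than a threshold, so clean pixels (and fine detail) are left untouched. This is a plain illustration of that baseline, not the paper's PIE-based filters; the threshold and window size are arbitrary.

        import numpy as np

        def switching_median(img, threshold=40, radius=1):
            """Simple switching median filter for a 2-D grayscale image."""
            img = img.astype(float)
            out = img.copy()
            h, w = img.shape
            for i in range(radius, h - radius):
                for j in range(radius, w - radius):
                    window = img[i - radius:i + radius + 1, j - radius:j + radius + 1]
                    med = np.median(window)
                    # Switch: only pixels far from the local median are treated as impulses.
                    if abs(img[i, j] - med) > threshold:
                        out[i, j] = med
            return out

        # Tiny example: a flat patch corrupted by one salt impulse.
        patch = np.full((5, 5), 100.0)
        patch[2, 2] = 255.0
        print(switching_median(patch)[2, 2])   # -> 100.0, impulse removed
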
  • Risk-Based Stopping Criteria for Test Sequencing

    Publication Year: 2008 , Page(s): 1363 - 1373
    Cited by:  Papers (2)

    Testing complex manufacturing systems, like ASML lithographic machines, can take up to 45% of the total development time. The decision of when to stop testing is often difficult to make because less testing may leave critical faults in the system, while more testing increases time-to-market. In this paper, we solve the problem of deciding when to stop testing by introducing a test-sequencing method that incorporates several stopping criteria. These stopping criteria consist of objectives and constraints on the test cost and the remaining risk cost. For a given problem, a suitable stopping criterion can be chosen. For example, with the risk-based stopping criterion, testing stops when the test time or cost exceeds the risk cost. Furthermore, we show that it is also possible to model reliability problems with this test-sequencing method. The method is demonstrated on ASML systems with two case studies. The first case study was conducted in the test phase during the development of the software that is used to control an ASML lithographic machine. The second case study was conducted on the reliability testing of a lithographic machine.
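    One reading of the risk-based stopping criterion described in the abstract: keep executing tests (each with a cost and an expected reduction of the remaining risk cost) and stop as soon as the next test costs more than the risk that is left. The numbers and the greedy ordering below are assumptions for illustration, not the paper's test-sequencing model.

        # Hypothetical tests: (name, test cost, expected risk-cost reduction).
        tests = [("t1", 5.0, 40.0), ("t2", 8.0, 30.0), ("t3", 6.0, 12.0), ("t4", 9.0, 3.0)]

        remaining_risk = 90.0        # assumed total risk cost before testing
        spent = 0.0

        # Greedy order: best risk reduction per unit of test cost first.
        for name, cost, reduction in sorted(tests, key=lambda t: t[2] / t[1], reverse=True):
            if cost > remaining_risk:          # risk-based stop: the test costs more than
                print(f"stop before {name}")   # the risk it could still address
                break
            spent += cost
            remaining_risk = max(0.0, remaining_risk - reduction)
            print(f"ran {name}: spent={spent:.0f}, remaining risk={remaining_risk:.0f}")
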
  • Automated Identification of Chromosome Segments Involved in Translocations by Combining Spectral Karyotyping and Banding Analysis

    Publication Year: 2008 , Page(s): 1374 - 1384
    Cited by:  Papers (4)

    The identification of chromosome abnormalities is an essential part of diagnosis and treatment of genetic disorders such as chromosomal syndromes and many types of cancer. Modern cytogenetic imaging techniques have improved the study of chromosome aberrations but they are most often used as adjuncts to traditional G-banded karyotype analysis. Molecular cytogenetic techniques such as comparative genomic hybridization, multicolor fluorescence in situ hybridization, and spectral karyotyping (SKY) are able to detect chromosome copy number changes and complex structural aberrations in cancers, particularly in hematological malignancies and solid tumors. However, banded chromosome analysis is essential to distinguish between normal and abnormal chromosomes. Currently available cytogenetic imaging software is designed to classify only normal chromosomes. The identification of the banded regions involved in the abnormal chromosomes is done manually. In this paper, we propose an algorithm to automate the banding analysis of abnormal chromosomes by comparing the information obtained by SKY for precise identification of translocation breakpoints. Our algorithm is based on the dynamic time warping method in order to overcome the problems due to the nonrigid nature of chromosomes. The method has been implemented and successfully applied to detect chromosome translocations, deletions, and duplications in cell lines derived from solid tumors.
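    Dynamic time warping, which the abstract uses to align band profiles despite the nonrigid stretching of chromosomes, is a standard dynamic program; the sketch below computes the DTW cost between two 1-D intensity profiles. The profiles are invented, and the paper's segmentation and SKY integration are not shown.

        import numpy as np

        def dtw_cost(a, b):
            """Classic DTW distance between two 1-D sequences (absolute-difference cost)."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    step = abs(a[i - 1] - b[j - 1])
                    D[i, j] = step + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        # Hypothetical band-intensity profiles: the second is a stretched first.
        profile_a = np.array([0.1, 0.9, 0.8, 0.2, 0.7, 0.1])
        profile_b = np.array([0.1, 0.1, 0.9, 0.85, 0.8, 0.2, 0.2, 0.7, 0.1])

        print("DTW cost:", round(dtw_cost(profile_a, profile_b), 3))
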
  • A Context-Dependent Algorithm for Merging Uncertain Information in Possibility Theory

    Publication Year: 2008 , Page(s): 1385 - 1397
    Cited by:  Papers (4)

    The need to merge multiple sources of uncertain information is an important issue in many application areas, particularly when there is potential for contradictions between sources. Possibility theory offers a flexible framework to represent, and reason with, uncertain information, and there is a range of merging operators, such as the conjunctive and disjunctive operators, for combining information. However, with the proposals to date, the context of the information to be merged is largely ignored during the process of selecting which merging operators to use. To address this shortcoming, in this paper, we propose an adaptive merging algorithm which selects largely partially maximal consistent subsets of sources, which can be merged through the relaxation of the conjunctive operator, by assessing the coherence of the information in each subset. In this way, a fusion process can integrate both conjunctive and disjunctive operators in a more flexible manner and thereby be more context dependent. A comparison with related merging methods shows how our algorithm can produce a more consensual result.
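    The conjunctive and disjunctive operators named in the abstract are, in standard possibility theory, a pointwise minimum and maximum of the sources' possibility distributions, and the height of the conjunctive result measures how consistent the sources are. The sketch below shows only these textbook operators on two made-up distributions; the paper's context-dependent selection and relaxation are not reproduced.

        import numpy as np

        # Possibility distributions of two sources over the same discrete frame.
        frame = ["a", "b", "c", "d"]
        pi1 = np.array([1.0, 0.8, 0.3, 0.0])
        pi2 = np.array([0.2, 1.0, 0.9, 0.1])

        conjunctive = np.minimum(pi1, pi2)   # demands agreement of both sources
        disjunctive = np.maximum(pi1, pi2)   # tolerates either source being right

        # Height of the conjunction = degree of consistency between the sources.
        consistency = conjunctive.max()
        print("conjunctive :", dict(zip(frame, conjunctive)))
        print("disjunctive :", dict(zip(frame, disjunctive)))
        print("consistency h =", consistency)   # low h suggests using the disjunctive operator
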
  • Sensor Placement for Fault Diagnosis

    Publication Year: 2008 , Page(s): 1398 - 1410
    Cited by:  Papers (28)  |  Patents (1)

    An algorithm is developed for computing which sensors to add to meet a diagnosis requirement specification concerning fault detectability and fault isolability. The method is based only on the structural information in a model, which means that possibly large and nonlinear differential-algebraic models can be handled in an efficient manner. The approach is exemplified on a model of an industrial valve where the benefits and properties of the method are clearly shown.
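    A simplified way to picture the requirement specification the abstract refers to: with a binary sensor-versus-fault sensitivity table, a fault is detectable if at least one chosen sensor reacts to it, and two faults are isolable if the chosen sensors give them different response signatures. The table and sensor set below are hypothetical; the paper works on structural models of differential-algebraic systems, which this sketch does not attempt.

        from itertools import combinations

        # Hypothetical sensitivity table: which candidate sensor reacts to which fault.
        reacts = {
            "s1": {"f1", "f2"},
            "s2": {"f2", "f3"},
            "s3": {"f3"},
        }
        faults = {"f1", "f2", "f3"}
        chosen = {"s1", "s2"}                    # candidate sensor set to evaluate

        def signature(fault, sensors):
            return tuple(sorted(s for s in sensors if fault in reacts[s]))

        detectable = {f for f in faults if signature(f, chosen)}
        isolable_pairs = {(f, g) for f, g in combinations(sorted(faults), 2)
                          if signature(f, chosen) != signature(g, chosen)}

        print("detectable faults :", sorted(detectable))
        print("isolable pairs    :", sorted(isolable_pairs))
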
  • A Hybrid Particle Swarm Branch-and-Bound (HPB) Optimizer for Mixed Discrete Nonlinear Programming

    Publication Year: 2008 , Page(s): 1411 - 1424
    Cited by:  Papers (8)

    This paper proposes a new algorithm for solving mixed discrete nonlinear programming (MDNLP) problems, designed to efficiently combine particle swarm optimization (PSO), which is a well-known global optimization technique, and branch-and-bound (BB), which is a widely used systematic deterministic algorithm for solving discrete problems. The proposed algorithm combines the global but slow search of PSO with the rapid but local search capabilities of BB, to simultaneously achieve an improved optimization accuracy and a reduced requirement for computational resources. It is capable of handling arbitrary continuous and discrete constraints without the use of a penalty function, which is frequently cumbersome to parameterize. At the same time, it maintains a simple, generic, and easy-to-implement architecture, and it is based on sequential quadratic programming for solving the NLP subproblems in the BB tree. The performance of the new hybrid PSO-BB algorithm is evaluated against real-world MDNLP benchmark problems, and it is found to be highly competitive compared with existing algorithms.
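    The particle swarm half of the hybrid can be sketched with the standard velocity and position update (inertia plus cognitive and social pulls toward the personal and global bests); the branch-and-bound half and the discrete-variable handling are not shown. The objective and all parameters below are generic textbook choices, not those of the HPB optimizer.

        import numpy as np

        rng = np.random.default_rng(2)
        f = lambda x: np.sum((x - 3.0) ** 2, axis=-1)    # toy objective, minimum at x = 3

        n, dim, iters = 20, 2, 100
        w, c1, c2 = 0.7, 1.5, 1.5                        # inertia, cognitive, social weights

        x = rng.uniform(-10, 10, (n, dim))               # particle positions
        v = np.zeros((n, dim))                           # particle velocities
        pbest, pbest_val = x.copy(), f(x)
        gbest = pbest[pbest_val.argmin()].copy()

        for _ in range(iters):
            r1, r2 = rng.random((n, dim)), rng.random((n, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            val = f(x)
            improved = val < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], val[improved]
            gbest = pbest[pbest_val.argmin()].copy()

        print("best point:", np.round(gbest, 3), "objective:", round(float(f(gbest)), 6))
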
  • Further Development of Input-to-State Stabilizing Control for Dynamic Neural Network Systems

    Publication Year: 2008 , Page(s): 1425 - 1433
    Cited by:  Papers (8)

    The authors present an approach for input-to-state stabilizing control of dynamic neural networks, which extends the existing result in the literature to a wider class of systems. This methodology is developed by using the Lyapunov technique, inverse optimality, and the Hamilton-Jacobi-Bellman equation. Depending on the dimensions of state and input, we construct two inverse optimal feedback laws to achieve the input-to-state stabilization of a wider class of dynamic neural network systems. With the help of Sontag's formula, one of the two control laws is developed from the creation of a scalar function to eliminate a restriction requiring the same number of states and inputs. In addition, the proposed designs achieve global asymptotic stability and global inverse optimality with respect to some meaningful cost functional. Numerical examples demonstrate the performance of the approach.
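    For reference, Sontag's formula in its standard single-input form for a control-affine system x' = f(x) + g(x)u with a control Lyapunov function V is given below, with a(x) = grad V(x) . f(x) and b(x) = grad V(x) . g(x). This is the generic textbook statement only; the paper's constructions, which remove the equal-states-and-inputs restriction, are not captured here.

        % Sontag's universal formula (single-input, control-affine case)
        % a(x) = \nabla V(x)\cdot f(x), \qquad b(x) = \nabla V(x)\cdot g(x)
        k(x) =
        \begin{cases}
          -\dfrac{a(x) + \sqrt{a(x)^2 + b(x)^4}}{b(x)}, & b(x) \neq 0,\\[2mm]
          0, & b(x) = 0.
        \end{cases}
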
  • Connectionism, Controllers, and a Brain Theory

    Publication Year: 2008 , Page(s): 1434 - 1441
    Cited by:  Papers (10)

    This paper proposes a new theory for the internal mechanisms of the brain. It postulates that there are controllers in the brain and that there are parts of the brain that control other parts. Thus, the theory refutes the connectionist theory that there are no separate controllers in the brain for higher level functions and that all control is "local and distributed" at the level of the cells. Connectionist algorithms themselves are used to prove this theory. Moreover, there is evidence in the neuroscience literature to support this theory. Thus, this paper proposes a control theoretic approach for understanding how the brain works and learns. That means that control theoretic principles should be applicable to developing systems similar to the brain.
  • Role Transfer Problems and Algorithms

    Publication Year: 2008 , Page(s): 1442 - 1450
    Cited by:  Papers (15)

    Role transfer is a common activity in an organization, particularly in a crisis situation. Role assignment and transfer regulations are important for accomplishing it. This paper discusses the general role transfer problem; proposes a role specification mechanism; builds a set of terminologies for role transfer based on a revised environment-class, agent, role, group, and object model; and presents algorithms to validate role transfer while maintaining group viability. The contributions include formulating the problem of role transfer in a generalized form, developing a set of algorithms, and presenting a solution when group members are insufficient.
  • Access over 1 million articles - The IEEE Digital Library [advertisement]

    Publication Year: 2008 , Page(s): 1451
    Freely Available from IEEE
  • 2009 ICMLC ICWAPR

    Publication Year: 2008 , Page(s): 1452
    Freely Available from IEEE

Aims & Scope

The fields of systems engineering and human-machine systems: systems engineering includes efforts that involve issue formulation, issue analysis and modeling, and decision making and issue interpretation at any of the lifecycle phases associated with the definition, development, and implementation of large systems.

 

This Transactions ceased production in 2012. The current retitled publication is IEEE Transactions on Systems, Man, and Cybernetics: Systems.


Meet Our Editors

Editor-in-Chief
Dr. Witold Pedrycz
University of Alberta