IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans

Issue 3 • May 2010

  • Table of contents

    Page(s): C1 - 433
    PDF (51 KB)
    Freely Available from IEEE
  • IEEE Transactions on Systems, Man, and Cybernetics—Part A: Systems and Humans publication information

    Page(s): C2
    PDF (38 KB)
    Freely Available from IEEE
  • Introduction to the Special Issue on Recent Advances in Biometrics

    Page(s): 434 - 436
    PDF (74 KB)
    Freely Available from IEEE
  • Adaptive Appearance Model and Condensation Algorithm for Robust Face Tracking

    Page(s): 437 - 448
    PDF (1053 KB) | HTML

    We present an adaptive framework for condensation algorithms in the context of human-face tracking. We attack the face-tracking problem by making factored sampling more efficient and appearance updates more effective. An adaptive affine cascade factored sampling strategy is introduced to sample the parameter space so that coarse face locations are found first, followed by a fine factored sampling with a small number of particles. In addition, the local linearity of an appearance manifold is used in conjunction with a new criterion to select a tangent plane for updating the appearance during tracking. Our proposed method seeks the best linear variety from the selected tangent plane to form a reference image. We demonstrate the effectiveness and efficiency of the proposed method on a number of challenging videos. These test video sequences show that our method is robust to illumination, appearance, and pose changes, as well as to temporary occlusions. Quantitatively, our method achieves an average root-mean-square error of 4.98 on the well-known Dudek video sequence while maintaining a speed of 8.74 frames/s. Finally, while our algorithm is adaptive during execution, no training is required.
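
    A minimal, generic condensation (particle-filter) loop is sketched below to make the factored-sampling terminology concrete. The observe() likelihood, the random-walk motion model, and all parameter values are illustrative assumptions; the paper's adaptive affine cascade sampling and manifold-based appearance update are not reproduced here.

    import numpy as np

    def condensation_step(particles, weights, observe, motion_std=2.0, rng=np.random):
        """One predict-weight-resample cycle over 2-D location hypotheses."""
        n = len(particles)
        # Factored sampling: resample hypotheses in proportion to their previous weights.
        particles = particles[rng.choice(n, size=n, p=weights)]
        # Predict: diffuse the hypotheses with a simple random-walk motion model.
        particles = particles + rng.normal(0.0, motion_std, particles.shape)
        # Measure: score each hypothesis with an appearance likelihood (placeholder callable).
        weights = np.array([observe(p) for p in particles], dtype=float)
        weights /= weights.sum()
        # Report the weighted-mean state as the current track estimate.
        estimate = (weights[:, None] * particles).sum(axis=0)
        return particles, weights, estimate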
  • Assessing the Uniqueness and Permanence of Facial Actions for Use in Biometric Applications

    Page(s): 449 - 460
    PDF (1930 KB) | HTML

    Although the human face is commonly used as a physiological biometric, very little work has been done to exploit the idiosyncrasies of facial motions for person identification. In this paper, we investigate the uniqueness and permanence of facial actions to determine whether these can be used as a behavioral biometric. Experiments are carried out using 3-D video data of participants performing a set of very short verbal and nonverbal facial actions. The data have been collected over long time intervals to assess the variability of the subjects' emotional and physical conditions. Quantitative evaluations are performed for both the identification and the verification problems; the results indicate that emotional expressions (e.g., smile and disgust) are not sufficiently reliable for identity recognition in real-life situations, whereas speech-related facial movements show promising potential.
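
    As a reading aid for the verification experiments mentioned above, the sketch below shows a standard equal-error-rate (EER) computation from genuine and impostor score arrays. The score distributions are synthetic placeholders, not data from the paper.

    import numpy as np

    def equal_error_rate(genuine, impostor):
        """Sweep thresholds and return the operating point where FAR and FRR cross."""
        thresholds = np.sort(np.concatenate([genuine, impostor]))
        far = np.array([(impostor >= t).mean() for t in thresholds])  # false accept rate
        frr = np.array([(genuine < t).mean() for t in thresholds])    # false reject rate
        i = np.argmin(np.abs(far - frr))
        return 0.5 * (far[i] + frr[i]), thresholds[i]

    genuine = np.random.normal(0.7, 0.1, 500)     # placeholder genuine scores
    impostor = np.random.normal(0.4, 0.1, 5000)   # placeholder impostor scores
    eer, threshold = equal_error_rate(genuine, impostor)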
  • Tracking Vertex Flow and Model Adaptation for Three-Dimensional Spatiotemporal Face Analysis

    Page(s): 461 - 474
    PDF (1221 KB) | HTML

    Research in the areas of 3-D face recognition and 3-D facial expression analysis has intensified in recent years. However, most research has focused on 3-D static data analysis. In this paper, we investigate the facial analysis problem using dynamic 3-D face model sequences. One of the major obstacles to analyzing such data is the lack of feature correspondences due to the variable number of vertices across individual models or 3-D model sequences. In this paper, we present an effective approach for establishing vertex correspondences using a tracking-model-based approach for vertex registration, coarse-to-fine model adaptation, and vertex motion trajectory (called vertex flow) estimation. We propose to establish correspondences across frame models based on a 2-D intermediary, which is generated using conformal mapping and a generic model adaptation algorithm. Based on our newly created 3-D dynamic face database, we also propose to use a spatiotemporal hidden Markov model (ST-HMM) that incorporates 3-D surface feature characterization to learn the spatial and temporal information of faces. The advantage of using 3-D dynamic data for face recognition has been evaluated by comparing our approach to three conventional approaches: a 2-D-video-based temporal HMM, a conventional 2-D-texture-based approach (e.g., a Gabor-wavelet-based approach), and static 3-D-model-based approaches. To further evaluate the usefulness of vertex flow and the adapted model, we have also applied a spatiotemporal face model descriptor for facial expression classification based on dynamic 3-D model sequences.
  • Hand-Drawn Face Sketch Recognition by Humans and a PCA-Based Algorithm for Forensic Applications

    Page(s): 475 - 485
    PDF (938 KB) | HTML

    Because face sketches represent the original faces in a very concise yet recognizable form, they play an important role in criminal investigations, human visual perception, and face biometrics. In this paper, we compared the performances of humans and a principal component analysis (PCA)-based algorithm in recognizing face sketches. A total of 250 sketches of 50 subjects were involved. All of the sketches were drawn manually by five artists (each artist drew 50 sketches, one for each subject). The experiments were carried out by matching sketches in a probe set to photographs in a gallery set. This study resulted in the following findings: 1) a large interartist variation in terms of sketch recognition rate was observed; 2) fusion of the sketches drawn by different artists significantly improved the recognition accuracy of both humans and the algorithm; 3) human performance seems mildly correlated with that of the PCA algorithm; 4) humans performed better in recognizing the caricature-like sketches that show various degrees of geometrical distortion or deviation, given the particular data set used; 5) score-level fusion with the sum rule worked well in combining sketches, at least for a small number of artists; and 6) the algorithm was superior with sketches of less distinctive features, while humans seemed more efficient in handling tonality (or pigmentation) cues of the sketches that were not processed with advanced transformation functions.
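
    The sketch below illustrates eigenface-style PCA matching of the kind the comparison relies on, using scikit-learn: photographs form the gallery, vectorized sketches are projected into the same subspace, and each probe is assigned its nearest gallery identity. Array shapes and the number of components are assumptions of this sketch, not the paper's settings.

    import numpy as np
    from sklearn.decomposition import PCA

    def pca_rank1_match(gallery, probes, n_components=50):
        """gallery, probes: (n_images, n_pixels) arrays of vectorized face images."""
        pca = PCA(n_components=n_components).fit(gallery)   # subspace learned from photos
        g, p = pca.transform(gallery), pca.transform(probes)
        # Rank-1 matching: index of the closest gallery photo for each probe sketch.
        distances = np.linalg.norm(p[:, None, :] - g[None, :, :], axis=2)
        return np.argmin(distances, axis=1)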
  • Toward Unconstrained Ear Recognition From Two-Dimensional Images

    Page(s): 486 - 494
    PDF (798 KB) | HTML

    Ear recognition, as a biometric, has several advantages. In particular, ears can be measured remotely and are also relatively static in size and structure for each individual. Unfortunately, at present, good recognition rates require controlled conditions. For commercial use, these systems need to be much more robust. In particular, ears have to be recognized from different angles (poses), under different lighting conditions, and with different cameras. It must also be possible to distinguish ears from background clutter and identify them when partly occluded by hair, hats, or other objects. The purpose of this paper is to suggest how progress toward such robustness might be achieved through a technique that improves ear registration. The approach focuses on 2-D images, treating the ear as a planar surface that is registered to a gallery using a homography transform calculated from scale-invariant feature-transform feature matches. The feature matches reduce the gallery size and enable a precise ranking using a simple 2-D distance algorithm. Analysis on a range of data sets demonstrates the technique to be robust to background clutter, viewing angles up to ±13°, and up to 18% occlusion. In addition, recognition remains accurate with masked ear images as small as 20 × 35 pixels.
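
    The registration step described above maps directly onto standard OpenCV calls; a sketch is given below. It matches SIFT descriptors between a probe and a gallery ear image, applies Lowe's ratio test, and estimates a homography with RANSAC. The ratio and RANSAC threshold are illustrative choices, and OpenCV 4.4 or later is assumed for SIFT_create.

    import cv2
    import numpy as np

    def register_ear(probe_gray, gallery_gray, ratio=0.75):
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(probe_gray, None)
        kp2, des2 = sift.detectAndCompute(gallery_gray, None)
        # Lowe's ratio test on 2-nearest-neighbour descriptor matches.
        good = [m for m, n in cv2.BFMatcher().knnMatch(des1, des2, k=2)
                if m.distance < ratio * n.distance]
        if len(good) < 4:
            return None                       # too few correspondences for a homography
        src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return H, int(inliers.sum())          # transform plus inlier count for ranking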
  • Extended-Depth-of-Field Iris Recognition Using Unrestored Wavefront-Coded Imagery

    Page(s): 495 - 508
    PDF (1470 KB) | HTML

    Iris recognition can offer high-accuracy person recognition, particularly when the acquired iris image is well focused. However, in some practical scenarios, user cooperation may not be sufficient to acquire iris images in focus; therefore, iris recognition using camera systems with a large depth of field is very desirable. One approach to achieve extended depth of field is to use a wavefront-coding system as proposed by Dowski and Cathey, which uses a phase modulation mask. The conventional approach when using a camera system with such a phase mask is to restore the raw images acquired from the camera before feeding them into the iris recognition module. In this paper, we investigate the feasibility of skipping the image restoration step with minimal degradation in recognition performance while still increasing the depth of field of the whole system compared to an imaging system without a phase mask. Using simulated wavefront-coded imagery, we present the results of two different iris recognition algorithms, namely, Daugman's iriscode and correlation-filter-based iris recognition, using more than 1000 iris images taken from the iris challenge evaluation database. We carefully study the effect of an off-the-shelf phase mask on iris segmentation and iris matching, and finally, to better enable the use of unrestored wavefront-coded images, we design a custom phase mask by formulating an optimization problem. Our results suggest that, in exchange for some degradation in recognition performance at the best focus, we can increase the depth of field by a factor of about four (over a conventional camera system without a phase mask) by carefully designing the phase masks.
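
    For readers unfamiliar with wavefront coding, the sketch below simulates the point-spread function of a cubic phase mask of the general form used by Dowski and Cathey: a circular pupil multiplied by exp(j*alpha*(x^3 + y^3)) and Fourier-transformed. The grid size and alpha value are illustrative assumptions; the paper's custom optimized mask is not reproduced.

    import numpy as np

    def cubic_phase_psf(n=256, alpha=20.0):
        x = np.linspace(-1.0, 1.0, n)
        X, Y = np.meshgrid(x, x)
        aperture = (X**2 + Y**2) <= 1.0                         # circular pupil
        pupil = aperture * np.exp(1j * alpha * (X**3 + Y**3))   # cubic phase modulation
        psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2    # incoherent PSF
        return psf / psf.sum()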
  • Estimating and Fusing Quality Factors for Iris Biometric Images

    Page(s): 509 - 524
    PDF (1693 KB) | HTML

    Iris recognition, the ability to recognize and distinguish individuals by their iris pattern, is one of the most reliable biometrics in terms of recognition and identification performance. However, the performance of these systems is affected by poor-quality imaging. In this paper, we extend iris quality assessment research by analyzing the effect of various quality factors such as defocus blur, off-angle, occlusion/specular reflection, lighting, and iris resolution on the performance of a traditional iris recognition system. We further design a fully automated iris image quality evaluation block that estimates defocus blur, motion blur, off-angle, occlusion, lighting, specular reflection, and pixel counts. First, each factor is estimated individually; the second step then fuses the estimated factors by using a Dempster-Shafer theory approach to evidential reasoning. The designed block is evaluated on three data sets: the Institute of Automation, Chinese Academy of Sciences (CASIA) 3.0 interval subset, the West Virginia University (WVU) non-ideal iris data set, and the Iris Challenge Evaluation (ICE) 1.0 data set made available by the National Institute of Standards and Technology (NIST). Considerable improvement in recognition performance is demonstrated when removing poor-quality images selected by our quality metric. The upper bound on the computational complexity required to evaluate the quality of a single image is O(n² log n).
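
    The fusion step uses Dempster-Shafer evidential reasoning; a minimal implementation of Dempster's rule of combination over a two-element frame is sketched below. The mass assignments from the two hypothetical quality "experts" are illustrative and are not the paper's estimators.

    from itertools import product

    def dempster_combine(m1, m2):
        """m1, m2: dicts mapping focal elements (frozensets) to mass values."""
        combined, conflict = {}, 0.0
        for (a, x), (b, y) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + x * y
            else:
                conflict += x * y                      # mass falling on the empty set
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    GOOD, POOR = frozenset({"good"}), frozenset({"poor"})
    THETA = GOOD | POOR                                # total ignorance
    blur_expert = {GOOD: 0.6, POOR: 0.3, THETA: 0.1}   # e.g., from a defocus estimate
    occlusion_expert = {GOOD: 0.5, POOR: 0.2, THETA: 0.3}
    fused = dempster_combine(blur_expert, occlusion_expert)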
  • Cancelable Templates for Sequence-Based Biometrics with Application to On-line Signature Recognition

    Page(s): 525 - 538
    PDF (543 KB) | HTML

    Recent years have seen the rapid spread of biometric technologies for automatic people recognition. However, security and privacy issues still represent the main obstacles for the deployment of biometric-based authentication systems. In this paper, we propose an approach, which we refer to as BioConvolving, that is able to guarantee security and renewability to biometric templates. Specifically, we introduce a set of noninvertible transformations, which can be applied to any biometrics whose template can be represented by a set of sequences, in order to generate multiple transformed versions of the template. Once the transformation is performed, retrieving the original data from the transformed template is computationally as hard as random guessing. As a proof of concept, the proposed approach is applied to an on-line signature recognition system, where a hidden Markov model-based matching strategy is employed. The performance of a protected on-line signature recognition system employing the proposed BioConvolving approach is evaluated, both in terms of authentication rates and renewability capacity, using the MCYT signature database. The reported extensive set of experiments shows that protected and renewable biometric templates can be properly generated and used for recognition, at the expense of a slight degradation in authentication performance.
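
    As a rough illustration only: a BioConvolving-style non-invertible transform can be imitated by splitting each feature sequence at key-dependent points and convolving the resulting segments, as sketched below. The two-segment split and the key format are assumptions of this sketch; the paper defines the actual transform family and its renewability properties.

    import numpy as np

    def convolve_segments(sequence, key_fractions):
        """sequence: 1-D feature time series; key_fractions: sorted split points in (0, 1)."""
        n = len(sequence)
        cuts = [0] + [int(round(f * n)) for f in key_fractions] + [n]
        segments = [sequence[cuts[i]:cuts[i + 1]] for i in range(len(cuts) - 1)]
        transformed = segments[0]
        for seg in segments[1:]:
            transformed = np.convolve(transformed, seg)   # mixing makes inversion hard
        return transformed

    protected = convolve_segments(np.random.randn(200), key_fractions=[0.4])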
  • Quality-Based Score Normalization With Device Qualitative Information for Multimodal Biometric Fusion

    Page(s): 539 - 554
    PDF (1464 KB) | HTML

    As biometric technology is rolled out on a larger scale, it will be a common scenario (known as cross-device matching) to have a template acquired by one biometric device used by another during testing. This requires a biometric system to work with different acquisition devices, an issue known as device interoperability. We further distinguish two subproblems, depending on whether the device identity is known or unknown. In the latter case, we show that the device information can be probabilistically inferred given quality measures (e.g., image resolution) derived from the raw biometric data. If the template is kept unchanged, cross-device matching can result in significant degradation in performance. We propose to minimize this degradation by using device-specific quality-dependent score normalization. In the context of fusion, after having normalized each device output independently, these outputs can be combined using the naive Bayes principle. We have compared and categorized several state-of-the-art quality-based score normalization procedures, depending on how the relationship between quality measures and score is modeled, as follows: 1) direct modeling; 2) modeling via the cluster index of quality measures; and 3) extending 2) to further include the device information (device-specific cluster index). Experimental results carried out on the Biosecure DS2 data set show that the last approach can reduce both false acceptance and false rejection rates simultaneously. Furthermore, the compounded effect of normalizing each system individually in multimodal fusion is a significant improvement in performance over the baseline fusion (without using any quality information) when the device information is given.
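
    Only the final combination step is sketched below: once each device's scores have been converted into log-likelihood ratios by the quality- and device-dependent normalization, the naive Bayes combination reduces to a sum. The normalization itself, which is the contribution of the paper, is not shown, and the numbers are placeholders.

    import numpy as np

    def naive_bayes_fuse(llrs):
        """llrs: (n_samples, n_modalities) array of per-modality log-likelihood ratios."""
        return llrs.sum(axis=1)                # valid under conditional independence

    fused = naive_bayes_fuse(np.array([[1.2, -0.3], [0.4, 0.9]]))
    accepted = fused > 0.0                     # decide at a zero-LLR threshold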
  • Binary Biometrics: An Analytic Framework to Estimate the Performance Curves Under Gaussian Assumption

    Page(s): 555 - 571
    PDF (2294 KB) | HTML

    In recent years, the protection of biometric data has gained increased interest from the scientific community. Methods such as the fuzzy commitment scheme, helper-data system, fuzzy extractors, fuzzy vault, and cancelable biometrics have been proposed for protecting biometric data. Most of these methods use cryptographic primitives or error-correcting codes (ECCs) and use a binary representation of the real-valued biometric data. Hence, the difference between two biometric samples is given by the Hamming distance (HD) or bit errors between the binary vectors obtained from the enrollment and verification phases, respectively. If the HD is smaller (larger) than the decision threshold, then the subject is accepted (rejected) as genuine. Because of the use of ECCs, this decision threshold is limited to the maximum error-correcting capacity of the code, consequently limiting the tradeoff between the false rejection rate (FRR) and the false acceptance rate. A method to improve the FRR consists of using multiple biometric samples in either the enrollment or verification phase. The noise is suppressed, hence reducing the number of bit errors and decreasing the HD. In practice, the number of samples is empirically chosen without fully considering its fundamental impact. In this paper, we present a Gaussian analytical framework for estimating the performance of a binary biometric system given the number of samples being used in the enrollment and the verification phase. The detection error tradeoff curve that combines the false acceptance and false rejection rates is estimated to assess the system performance. The analytic expressions are validated using the Face Recognition Grand Challenge v2 and Fingerprint Verification Competition 2000 biometric databases.
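
    The decision rule and a Gaussian approximation of the resulting error rates can be written down directly, as sketched below. The means and standard deviations of the genuine and impostor bit-error counts are illustrative placeholders, not the fitted values from the paper.

    import numpy as np
    from scipy.stats import norm

    def accept(enrolled_bits, probe_bits, threshold):
        hamming_distance = np.count_nonzero(enrolled_bits != probe_bits)
        return hamming_distance <= threshold                # genuine if few bit errors

    def gaussian_far_frr(threshold, mu_gen, sigma_gen, mu_imp, sigma_imp):
        frr = 1.0 - norm.cdf(threshold, mu_gen, sigma_gen)  # genuine rejected: HD > t
        far = norm.cdf(threshold, mu_imp, sigma_imp)        # impostor accepted: HD <= t
        return far, frr

    far, frr = gaussian_far_frr(threshold=120, mu_gen=80, sigma_gen=20, mu_imp=256, sigma_imp=25)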
  • Large-Scale Public R&D Portfolio Selection by Maximizing a Biobjective Impact Measure

    Page(s): 572 - 582
    PDF (1014 KB)

    This paper addresses R&D portfolio selection in social institutions, state-owned enterprises, and other nonprofit organizations which periodically launch a call for proposals and distribute funds among accepted projects. A nonlinear discontinuous bicriterion optimization model is developed in order to find a compromise between a portfolio quality measure and the number of projects selected for funding. This model is then transformed into a linear mixed-integer formulation to present the Pareto front. Numerical experiments with up to 25 000 projects competing for funding demonstrate a high computational efficiency of the proposed approach. The acceptance/rejection rules are obtained for a portfolio using the rough set methodology.
  • Competitive Capacity and Price Decisions for Two Build-to-Order Manufacturers Facing Time-Dependent Demands

    Page(s): 583 - 595
    PDF (521 KB) | HTML

    This paper develops game-theoretic models to investigate the optimal competitive capacity-price decisions for two build-to-order manufacturers when they face the disruption of a random demand surge. Both manufacturers have their fixed capacity and pricing decisions for the low-demand period. When there is a sudden demand increase, they can temporarily acquire extra capacity and change their pricing decisions. Our goal is to determine the optimal joint capacity and pricing decisions for both low- and high-demand periods. We show that there exists a unique subgame perfect Nash equilibrium that is affected by the distribution of the disrupted amount of demand, the duration of the demand change, the market scale, the unit production cost, and the subcontracting cost. Recommendations on how and when the manufacturers should strategically increase their profits by adjusting their capacities and prices are provided. We also find that the demand disruption largely influences the motivation of the manufacturers to acquire capacity information when the cost of acquiring capacity information is considered. The effects of capacity and pricing competition are investigated. Insights are generated, and future research directions are outlined.
  • Segmentation of Human Body Parts Using Deformable Triangulation

    Page(s): 596 - 610
    PDF (1690 KB) | HTML

    This paper presents a novel segmentation algorithm that divides a body posture into different body parts using the technique of deformable triangulation. To analyze each posture more accurately, it is first segmented into triangular meshes, from which a spanning tree can be found using a depth-first search scheme. The tree is then decomposed into subsegments, where each subsegment can be considered a limb. Two hybrid methods (i.e., the skeleton-based and model-driven methods) are proposed for segmenting the posture into different body parts according to its occlusion conditions. To analyze occlusion conditions, a novel clustering scheme is proposed to cluster the training samples into a set of key postures. Then, a model space can be used to classify and segment each posture. If the input posture belongs to the nonocclusion category, the skeleton-based method is used to divide it into different body parts, which can be refined using a set of Gaussian mixture models (GMMs). For the occlusion case, we propose a model-driven technique to select a good reference model for guiding the process of body part segmentation. However, if two postures' contours are similar, there will be some ambiguity that can lead to failure during the model selection process. Thus, this paper proposes a tree structure that uses a tracking technique so that the best model can be selected not only from the current frame but also from its previous frame. Then, a suitable GMM-based segmentation scheme can be used to finely segment a body posture into the different body parts. The experimental results show that the proposed method for body part segmentation is robust, accurate, and powerful.
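
    The mesh-and-spanning-tree step can be prototyped with SciPy, as sketched below: triangulate 2-D silhouette samples (Delaunay is used here as a stand-in for the paper's deformable triangulation), build the triangle adjacency graph, and extract a spanning tree by depth-first search. Limb decomposition and the GMM refinement are not shown.

    import numpy as np
    from scipy.spatial import Delaunay

    def dfs_spanning_tree(points):
        tri = Delaunay(points)
        # tri.neighbors[i] lists the triangles sharing an edge with triangle i (-1 = none).
        visited, tree_edges, stack = {0}, [], [0]
        while stack:
            t = stack.pop()
            for nb in tri.neighbors[t]:
                if nb != -1 and nb not in visited:
                    visited.add(nb)
                    tree_edges.append((t, nb))
                    stack.append(nb)
        return tree_edges

    edges = dfs_spanning_tree(np.random.rand(60, 2))   # placeholder silhouette samples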
  • Modified Frequency-Partitioned Spectrum Estimation for a Wireless Health Advanced Monitoring Bio-Diagnosis System

    Page(s): 611 - 622
    PDF (2513 KB) | HTML

    This paper proposes a technique for frequency-partitioned spectrum estimation (FPSE), which is used in the National Taiwan University Wireless Health Advanced Monitoring Bio-Diagnosis System for electrocardiogram analysis. A process for analyzing the RR interval (a time series of heart-beat durations that represents heart-rate variation), in conjunction with the fuzzy clustering technique, is proposed for arrhythmia recognition. FPSE helps reduce data transmission errors and allows the computational load to be moved to a remote server; however, it suffers from waveform deterioration during reconstruction of the signal power spectrum. To compensate for this problem, this paper proposes a modified FPSE approach that imposes an additional boundary constraint to ensure that the estimated spectrum is smooth. The simulation results show that the proposed algorithm is more effective at recovering the original frequency information and achieves a globally asymptotic trend. The proposed arrhythmia recognition procedure was applied to the Massachusetts Institute of Technology-Boston's Beth Israel Hospital (MIT-BIH) database (developed by MIT and Boston's Beth Israel Deaconess Medical Center), which demonstrated that the procedure is both convenient and efficient.
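
    The sketch below is not the paper's FPSE algorithm; it is a conventional baseline, shown only to make the terms "RR interval" and "power spectrum" concrete: the irregularly sampled RR series is resampled onto a uniform grid and a Welch periodogram is computed with SciPy. The 4-Hz resampling rate is a common heart-rate-variability convention assumed here.

    import numpy as np
    from scipy.signal import welch

    def rr_power_spectrum(rr_seconds, fs=4.0):
        """rr_seconds: successive heart-beat intervals in seconds."""
        beat_times = np.cumsum(rr_seconds)
        grid = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)   # uniform time grid
        rr_uniform = np.interp(grid, beat_times, rr_seconds)        # resampled tachogram
        freqs, psd = welch(rr_uniform - rr_uniform.mean(), fs=fs,
                           nperseg=min(256, len(grid)))
        return freqs, psd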
  • Model-Based Development of Virtual Laboratories for Robotics Over the Internet

    Page(s): 623 - 634
    PDF (1225 KB) | HTML

    Extending technical education to students abroad requires the systematic development of virtual laboratories (VLs) that provide interaction with real and specialized equipment. This paper proposes a generic and modular model for VLs for robotics over the Internet. The model is defined by using Unified Modeling Language (UML) to depict its software structure and also Petri nets to describe its dynamic behavior. A development methodology uses the model as a reference framework. This proposed methodology, based on experiment specifications, customizes the framework in UML and formally translates its dynamic description, depicted by statecharts, into the Petri net formalism. Petri nets are used to analyze, control, and validate the VL dynamic design as a stable and event-synchronized telerobotic system. UML and Petri net charts obtained from the methodology supply a complete guideline for the developer to implement VLs for robotics. The model and its methodology are used to develop a remote VL for mobile robotics. This paper attempts to bridge the gap between ad hoc and formal implementation of VLs.
  • Decision Strategies in Mediated Multiagent Negotiations: An Optimization Approach

    Page(s): 635 - 640
    PDF (368 KB) | HTML

    We consider a problem of mediated group decision making where a number of agents provide a preference function over a set of alternatives. Then, using such information, a new agent provides its own preferences, and, after that, a mediation step is applied to aggregate the individual preferences in order to obtain a group-preference function. Finally, the most supported alternative is selected. Two key aspects are that the preference functions of the former agents may or may not have uncertainty, and that the mediation process rewards those agents that are open to other alternatives besides their most preferred ones. The question for the new agent is how to score its alternatives in such a way that its most preferred one gets the biggest group support. We propose to define such scoring or preference function as the solution of a nonlinear optimization problem. The model also takes into account that imprecision could exist in the preference functions. Through extensive simulations (varying the number of agents, alternatives, etc.), we conclude that the proposal is feasible and effective. Additionally, the usefulness of the mediation process rewarding openness is empirically confirmed.
  • A Modified Fuzzy Min–Max Neural Network With a Genetic-Algorithm-Based Rule Extractor for Pattern Classification

    Page(s): 641 - 650
    PDF (340 KB) | HTML

    In this paper, a two-stage pattern classification and rule extraction system is proposed. The first stage consists of a modified fuzzy min-max (FMM) neural-network-based pattern classifier, while the second stage consists of a genetic-algorithm (GA)-based rule extractor. Fuzzy if-then rules are extracted from the modified FMM classifier, and a "don't care" approach is adopted by the GA rule extractor to minimize the number of features in the extracted rules. Five benchmark problems and a real medical diagnosis task are used to empirically evaluate the effectiveness of the proposed FMM-GA system. The results are analyzed and compared with other published results. In addition, the bootstrap hypothesis analysis is conducted to quantify the results of the medical diagnosis task statistically. The outcomes reveal the efficacy of FMM-GA in extracting a set of compact and yet easily comprehensible rules while maintaining a high classification performance for tackling pattern classification tasks.
  • On Guo and Nixon's Criterion for Feature Subset Selection: Assumptions, Implications, and Alternative Options

    Page(s): 651 - 655
    PDF (139 KB) | HTML

    Guo and Nixon proposed a feature selection method based on maximizing I(x;Y), the multidimensional mutual information between feature vector x and class variable Y. Because computing I(x;Y) can be difficult in practice, Guo and Nixon proposed an approximation of I(x;Y) as the criterion for feature selection. We show that Guo and Nixon's criterion originates from approximating the joint probability distributions in I(x;Y) by second-order product distributions. We remark on the limitations of the approximation and discuss computationally attractive alternatives to compute I(x;Y).
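
    One computationally simple (if crude) alternative for estimating the multidimensional I(x;Y) is a plug-in estimate over discretized features, sketched below with scikit-learn: each selected feature is binned and the tuple of bins is treated as a single categorical variable. This is an assumption-laden illustration, not Guo and Nixon's approximation nor the specific alternatives discussed in the correspondence.

    import numpy as np
    from sklearn.metrics import mutual_info_score

    def subset_mutual_information(X, y, feature_idx, n_bins=8):
        """X: (n_samples, n_features) array; y: class labels; feature_idx: candidate subset."""
        binned = [np.digitize(X[:, j], np.histogram_bin_edges(X[:, j], bins=n_bins))
                  for j in feature_idx]
        # Encode each sample's tuple of bins as one categorical symbol.
        joint = [tuple(col[i] for col in binned) for i in range(len(X))]
        codes = {t: k for k, t in enumerate(set(joint))}
        return mutual_info_score(y, [codes[t] for t in joint])   # MI in nats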
  • Special issue on engineering applications of memetic computing

    Page(s): 656
    PDF (177 KB)
    Freely Available from IEEE
  • IEEE Systems, Man, and Cybernetics Society Information

    Page(s): C3
    PDF (29 KB)
    Freely Available from IEEE
  • IEEE Transactions on Systems, Man, and Cybernetics—Part A: Systems and Humans Information for authors

    Page(s): C4
    PDF (35 KB)
    Freely Available from IEEE

Aims & Scope

The fields of systems engineering and human-machine systems: systems engineering includes efforts that involve issue formulation, issue analysis and modeling, and decision making and issue interpretation at any of the lifecycle phases associated with the definition, development, and implementation of large systems.


This Transactions ceased production in 2012. The current retitled publication is IEEE Transactions on Systems, Man, and Cybernetics: Systems.


Meet Our Editors

Editor-in-Chief
Dr. Witold Pedrycz
University of Alberta