
IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans

Issue 6 • Nov. 2012

  • Table of contents

    Page(s): C1 - 1309
    PDF (48 KB)
    Freely Available from IEEE
  • IEEE Transactions on Systems, Man, and Cybernetics—Part A: Systems and Humans publication information

    Page(s): C2
    PDF (137 KB)
    Freely Available from IEEE
  • Guest Editorial on Health-Care Management and Optimization

    Page(s): 1310 - 1313
    PDF (169 KB)
    Freely Available from IEEE
  • Reducing Length of Stay in Emergency Department: A Simulation Study at a Community Hospital

    Page(s): 1314 - 1322
    PDF (922 KB) | HTML

    In this paper, a simulation model of an emergency department (ED) at a large community hospital, Central Baptist Hospital in Lexington, KY, is developed. Using such a model, we can accurately emulate the patient flow in the ED and carry out sensitivity analysis to determine the most critical process for improving quality of care (in terms of patient length of stay). In addition, a what-if analysis is performed to investigate potential changes in operation policies and their impact. Adding a floating nurse, combining registration with triage, mandating a physician's visit within 30 min, and simultaneously reducing the operation times of the most sensitive procedures can all result in substantial improvement. These recommendations have been submitted to the hospital leadership, and implementations are in progress.

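    A minimal Monte Carlo sketch in Python of the kind of what-if comparison the abstract describes; the stage names, exponential service-time means, and the "combine registration with triage" scenario are illustrative assumptions, not figures from the Central Baptist Hospital model, and queueing for resources is omitted for brevity.

        import random

        # Hypothetical mean durations (minutes) for a patient's main ED stages.
        stage_means = {"triage": 10, "registration": 5, "physician visit": 25, "treatment": 60}

        def simulate_los(combine_registration_with_triage=False):
            # Sum exponentially distributed stage times for one simulated patient.
            total = 0.0
            for stage, mean in stage_means.items():
                if combine_registration_with_triage and stage == "registration":
                    continue  # what-if policy: registration handled during triage
                total += random.expovariate(1.0 / mean)
            return total

        def mean_los(n=5000, **policy):
            return sum(simulate_los(**policy) for _ in range(n)) / n

        print("baseline LOS (min):", round(mean_los(), 1))
        print("combined registration/triage LOS (min):", round(mean_los(combine_registration_with_triage=True), 1))
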
  • Emergency Healthcare Workflow Modeling and Timeliness Analysis

    Page(s): 1323 - 1331
    PDF (190 KB) | HTML

    A health emergency is a situation that poses an immediate risk to health and life and requires urgent intervention to prevent its worsening. Emergency healthcare service is a real-time service, where timeliness is critical to mission success. Workflow management technology has received considerable attention in the healthcare field in recent years for the automation of both intra- and interorganizational healthcare processes. However, no work on timeliness analysis has been reported. In our previous work, we proposed the Workflows Intuitive and Formal Approach (WIFA) formalism for emergency response workflow modeling. In this paper, we extend the WIFA formalism to take task execution times into account and thus support emergency response timeliness analysis. An example from emergency healthcare shows how the timed WIFA workflow model works.

  • Intelligent Patient Management and Resource Planning for Complex, Heterogeneous, and Stochastic Healthcare Systems

    Page(s): 1332 - 1345
    PDF (912 KB) | HTML

    Effective resource requirement forecasting is necessary to reduce the escalating cost of care by ensuring optimum utilization and availability of scarce health resources. Patient hospital length of stay (LOS), and thus resource requirements, depend on many factors, including covariates representing patient characteristics such as age, gender, and diagnosis. We therefore propose the use of such covariates for better hospital capacity planning. Likewise, estimation of the patient's expected destination after discharge will help in allocating scarce community resources. Also, the probable discharge destination may well affect a patient's LOS in hospital. For instance, it might be necessary to delay the discharge of a patient so as to make appropriate care provision in the community. A number of deterministic models, such as ratio-based methods, have failed to address the inherent variability in complex health processes. To address such complexity, various stochastic models have therefore been proposed. However, such models fail to consider the inherent heterogeneity in patient behavior. Therefore, we here use a phase-type survival tree for groups of patients that are homogeneous with respect to the LOS distribution, on the basis of covariates such as time of admission, gender, and disease diagnosed; these homogeneous groups of patients can then model patient flow through a care system following stochastic pathways that are characterized by the covariates. Our phase-type model is then extended by further growing the survival tree based on covariates representing outcome measures such as treatment outcome or discharge destination. These extended phase-type survival trees are very effective in modeling the interrelationship between a patient's LOS and such outcome measures and allow us to describe patient movements through an integrated care system including hospital, social, and community components. In this paper, we first propose a generalization of the Coxian phase-type distribution to a Markov process with more than one absorbing state; we call this the multi-absorbing state phase-type distribution. We then describe how the model can be used with the extended phase-type survival tree for forecasting hospital, social, and community care resource requirements, estimating the cost of care, predicting patient demography at a given time in the future, and admission scheduling. We can thus provide a stochastic approach to capacity planning across complex heterogeneous care systems. The approach is illustrated using five years of retrospective data on patients admitted to the stroke unit of the Belfast City Hospital.

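    To make the multi-absorbing-state phase-type idea concrete, here is a small Python sketch that samples length of stay from a continuous-time Markov chain with transient phases and two absorbing discharge destinations; the rates and state labels are invented for illustration and are not fitted to the Belfast City Hospital data.

        import random

        # Hypothetical daily transition rates: three transient phases (0, 1, 2) and
        # two absorbing destinations ("home", "community care").
        rates = {
            0: {1: 0.50, "home": 0.10},
            1: {2: 0.30, "home": 0.15},
            2: {"home": 0.05, "community care": 0.10},
        }

        def sample_stay(start=0):
            state, days = start, 0.0
            while state in rates:                       # stop once an absorbing state is reached
                outgoing = rates[state]
                total_rate = sum(outgoing.values())
                days += random.expovariate(total_rate)  # exponential sojourn in the current phase
                state = random.choices(list(outgoing), weights=list(outgoing.values()))[0]
            return days, state

        los, destination = sample_stay()
        print(round(los, 1), "days, discharged to", destination)
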
  • Operations Management Applied to Home Care Services: The Problem of Assigning Human Resources to Patients

    Page(s): 1346 - 1363
    PDF (831 KB) | HTML

    In recent years, home care (HC) service systems have been developed as alternatives to conventional hospitalization. Many resources are involved in delivering HC services, including different categories of human resources, support staff, and material resources. One of the main issues encountered while planning human HC resources is the patient assignment problem, i.e., deciding which operator(s) will take care of which admitted patient given some sets of constraints (e.g., the continuity of care). This paper addresses the resource assignment problem for HC systems. A set of mathematical programming models to balance the workloads of the operators within specific categories is proposed. The models consider several peculiarities of HC services, such as the continuity of care constraint, operators' skills, and the geographical areas to which patients and operators belong. Given the high variability of patient demands, models are developed under the assumption that patients' demands are either deterministic or stochastic. The analysis of the results obtained from a real case study demonstrates the applicability of the proposed models as well as the benefits that stem from applying them. Moreover, the results show that an acceptable level of continuity of care cannot be achieved without modeling it as a hard constraint. The analysis under continuity of care also shows the high value of information and the difficulty of fully balancing workloads with the application of standard techniques.

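    A toy Python sketch of the assignment problem the abstract describes: continuity of care is imposed as a hard constraint (one operator per patient), and the worst operator workload is minimized by brute force. The demands and compatibility sets are made up and far smaller than a real HC instance, which would call for a mathematical programming solver.

        from itertools import product

        demand     = {"p1": 6, "p2": 4, "p3": 8, "p4": 5}                 # hypothetical weekly hours
        compatible = {"p1": ["a", "b"], "p2": ["a", "c"],
                      "p3": ["b", "c"], "p4": ["a", "b", "c"]}            # skill/area compatibility

        def balanced_assignment():
            patients = list(demand)
            best, best_worst = None, float("inf")
            for choice in product(*(compatible[p] for p in patients)):
                load = {}
                for patient, operator in zip(patients, choice):
                    load[operator] = load.get(operator, 0) + demand[patient]
                worst = max(load.values())
                if worst < best_worst:                                    # min-max workload balancing
                    best, best_worst = dict(zip(patients, choice)), worst
            return best, best_worst

        print(balanced_assignment())
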
  • Enabling Primary and Specialist Care Interoperability Through HL7 CDA Release 2 and the Chronic Care Model: An Italian Case Study

    Page(s): 1364 - 1384
    PDF (2117 KB) | HTML

    Interoperability is key to enabling clinical information systems for General Practitioners (GPs) and Hospital Specialists (HSs) to exchange and manage Chronic Care Model (CCM) medical records, Patient Summary (PS), and Electronic Prescription (e-Prescription) documents while accessing the electronic health record. We present a localization experience for the PS and e-Prescription, based on the Health Level Seven Version 3 Clinical Document Architecture Release 2, developed for Italian healthcare. We also describe an experience in implementing the CCM for sharing patient clinical data among healthcare providers in the management of diagnostic and therapeutic pathways for chronic diseases (diabetes). Finally, we propose, as a case study, a project for the integration of various services for GPs/HSs, in line with Italian regulations at both the national and regional (Tuscany region) levels.

  • Health Care Improvement: Comparative Analysis of Two CAD Systems in Mammographic Screening

    Page(s): 1385 - 1395
    PDF (367 KB) | HTML

    Technological innovations have produced remarkable results in the health care sector. In particular, computer-aided detection (CAD) systems are becoming very useful and helpful in supporting physicians in the early detection and control of some diseases, such as neoplastic pathologies. In this paper, two different CAD systems able to detect and localize microcalcification clusters in mammographic images are implemented. The two methods use an artificial neural network and a support vector machine, respectively, as the classifier. Using the MIAS database as the test set, the performance of the two implemented systems is compared in terms of sensitivity, specificity, accuracy, free-response operating characteristic curves, and Cohen's kappa coefficient. The values obtained for these measures show that both methods can operate as a "second opinion" in microcalcification cluster detection, improving the efficiency of the screening process.

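    Since the comparison relies on Cohen's kappa, a short Python sketch of how that coefficient is computed from a confusion matrix may be useful; the counts below are hypothetical, not results from either CAD system.

        def cohens_kappa(confusion):
            # confusion[i][j]: number of cases with true class i classified as class j.
            n = sum(sum(row) for row in confusion)
            p_observed = sum(confusion[i][i] for i in range(len(confusion))) / n
            p_expected = sum(
                (sum(confusion[i]) / n) * (sum(row[i] for row in confusion) / n)
                for i in range(len(confusion))
            )
            return (p_observed - p_expected) / (1 - p_expected)

        # Hypothetical 2x2 counts: rows = ground truth (no cluster / cluster), columns = CAD output.
        print(round(cohens_kappa([[50, 5], [8, 37]]), 3))
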
  • Population-Based Simulation for Public Health: Generic Software Infrastructure and Its Application to Osteoporosis

    Page(s): 1396 - 1409
    PDF (1066 KB) | HTML

    Policy-making in public health has great socio-economic consequences and must be done using the best available knowledge of the possible options. These processes are often too complex to be evaluated through analytical methods, so computer simulations are often the best way to produce quantitative evaluations of their performance. For that purpose, we propose a complete software infrastructure for the simulation of public health processes. This software stack includes a generic population-based simulator called the SynCHroNous Agent- and Population-based Simulator, which has a modern object-oriented software architecture and is completely configured through eXtensible Markup Language files. These configuration files can themselves be produced by a graphical user interface that allows nonprogrammers to model public health simulations. This software infrastructure is illustrated with the real-life case study of osteoporosis prevention in adult women populations. This example, which is of great interest to Quebec health decision makers, provides insightful results for comparing several prevention strategies on a realistic population.

  • Information Technology as Tools for Cancer Registry and Regional Cancer Network Integration

    Page(s): 1410 - 1424
    PDF (1486 KB) | HTML

    Background. With the publication of large studies from different health systems comparing survival probabilities, cancer registries are increasingly involved in clinical evaluation research. The changing role of registries strictly depends on the integration between the oncology system and proper information technology (IT) tools. IT is fundamental to improving the validity and timeliness of data diffusion when both the number of sources linked and the number of variables registered are on the rise. Aims. In this paper, we present a modern web-based management system that integrates different sources and validates and processes data, thus providing a new evaluation system for the oncology network based on cancer registries. Materials and methods. We developed a Web 2.0 management system for the Umbria Cancer Registry (S.G.RTUP) based on AMPAX technology (Apache, MySQL, PHP, Ajax, and XML) and object-oriented programming. The ISO/IEC 27001:2005 standard is followed to ensure secure access to the information. The S.G.RTUP architecture is modular and extensible, and information consistency is guaranteed by entity-relationship principles. Cancer sites, topology, morphology, and behavior are coded according to the International Classification of Diseases. Classical epidemiological indices for a cancer registry are implemented: incidence, mortality, years of potential life lost, and cumulative risk. S.G.RTUP has tools to prepare data for trend analysis and relative survival analysis. Geographical analysis is also implemented. Results. S.G.RTUP is integrated with the Oncology Network and gives timely epidemiological indices for the evaluation of oncological activities. The registration system that we developed can effectively manage different data sources. Automatic importing of routinely available data from pathology archives, screening services, and hospital discharge records will reduce the time needed to produce data and will also allow the expansion of registered information. Several services for data visualization and statistical analysis are implemented. A geographic information system based on the Google Maps API is used for geolocalization of cases and map plotting of incidence and mortality rates. We implemented the Besag, York, and Mollié algorithm for real-time smoothed maps. All services can be dynamically performed over a subset of data that the user can select through an innovative filtering system. Discussion and conclusion. IT contributed to shortening all phases of cancer registration, including linkage with external sources, coding, quality control, data management and analysis, and publication of results. Integration with the oncology network and secure Web access allowed us to design innovative population-based collaborative studies with clinicians. Our geographic analysis system enables us to develop sophisticated dynamic geostatistical tools.

    Open Access
  • The Development of an Agent-Based Modeling Framework for Simulating Engineering Team Work

    Page(s): 1425 - 1439
    PDF (564 KB) | HTML

    Team working is becoming increasingly important in modern organizations due to its beneficial outcomes. A team's performance levels are determined by complex interactions between the attributes of its individual members, the communication and dynamics between members, the working environment, and the team's work tasks. As organizations evolve, so too does the nature of team working. During the past two decades, product development in engineering organizations has increasingly been undertaken by multidisciplinary integrated product teams. Such increasing complexity means that the nature of research methods for studying teams must also evolve. Accordingly, this paper proposes an agent-based modeling approach for simulating team working within an engineering environment, informed by research conducted in two engineering organizations. The model includes a number of variables at the individual level (competency, motivation, availability, response rate), team level (communication, shared mental models, trust), and task level (difficulty, workflow), which jointly determine team performance (quality, time to complete the task, time spent working on the task). In addition to describing the model's development, the paper also reports the results of various simulation runs conducted in response to realistic team working scenarios, together with the model's validation. Finally, the paper discusses the model's practical applications as a tool for facilitating organizational decision making with respect to optimizing team working.

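    A very small Python sketch in the spirit of the agent-based model described above, using three of the individual-level variables named in the abstract (competency, motivation, availability); the way they are combined here is an invented toy rule, not the paper's model.

        import random

        class Engineer:
            def __init__(self, competency, motivation, availability):
                self.competency = competency
                self.motivation = motivation
                self.availability = availability

            def hourly_effort(self):
                # Contribute skill scaled by motivation, but only when available this hour.
                return self.competency * self.motivation if random.random() < self.availability else 0.0

        def hours_to_finish(team, task_difficulty, max_hours=1000):
            progress, hours = 0.0, 0
            while progress < task_difficulty and hours < max_hours:
                progress += sum(member.hourly_effort() for member in team)
                hours += 1
            return hours

        team = [Engineer(0.8, 0.9, 0.7), Engineer(0.6, 0.7, 0.9), Engineer(0.9, 0.5, 0.8)]
        print(hours_to_finish(team, task_difficulty=120.0), "hours")
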
  • A Modified Car-Following Model Based on a Neural Network Model of the Human Driver Effects

    Page(s): 1440 - 1449
    PDF (1180 KB) | HTML

    Nowadays, among microscopic traffic flow modeling approaches, car-following models are increasingly used by transportation experts in developing appropriate intelligent transportation systems. Unlike previous works, where the reaction delay is considered fixed, in this paper a modified neural network approach is proposed to simulate and predict car-following behavior based on the instantaneous reaction delay of the driver-vehicle unit, representing the human effects. This reaction delay is calculated based on a proposed idea, and the model is developed with this feature as an input. In this modeling, the inputs and outputs are chosen with respect to the reaction delay to train the neural network model. Using field data, the performance of the model is calculated and compared with the responses of some existing neural network car-following models. Considering the difference between the responses of the actual plant and the predicted model as the error, the comparison shows that the error in the proposed model is significantly smaller than that in the other models.

  • Sleep-Stage Decision Algorithm by Using Heartbeat and Body-Movement Signals

    Page(s): 1450 - 1459
    PDF (1817 KB) | HTML

    This paper describes a noninvasive algorithm to estimate the sleep stages used in the Rechtschaffen and Kales method (R-K method). The heartbeat and body-movement signals measured by a noninvasive pneumatic method are used to estimate the sleep stages instead of the electroencephalography and electromyography used in the R-K method. From the noninvasive measurements, we defined two indices that indicate the condition of REM sleep and the sleep depth. Functions to obtain the incidence ratio and the standard deviation of the extracted elements for each sleep stage were also determined for each age group of the subjects. Using these indices and functions, an algorithm to classify the subjects' sleep stages was proposed. The mean agreement ratios between the sleep-stage data obtained from the proposed method and those from the de facto standard R-K method, for the stages categorized into six, five, and three classes, were 51.6%, 56.2%, and 77.5%, and the corresponding mean values of the kappa statistic were 0.29, 0.39, and 0.48, respectively. The proposed method shows closer agreement with the results of the R-K method than a similar noninvasive method presented earlier.

  • Perceiving for Acting With Teleoperated Robots: Ecological Principles to Human–Robot Interaction Design

    Page(s): 1460 - 1475
    PDF (761 KB) | HTML

    By focusing primarily on the perceptual information available to an organism and by adopting a functional perspective, the ecological approach to perception and action provides a unique theoretical basis for addressing the remote perception problem raised by telerobotics. After clarifying some necessary concepts of this approach, we first detail some of the major implications of an ecological perspective for robot teleoperation. Based on these, we then propose a framework for coping with the alteration of the information available to the operator. While our proposal shares much with previous work that applied ecological principles to the design of man-machine interfaces (e.g., ecological interface design), it puts a special emphasis on the control of action (instead of process), which is central to teleoperation but has seldom been addressed in the literature.

  • A New Universal Generating Function Method for Solving the Single (d, τ)-Quick-Path Problem in Multistate Flow Networks

    Page(s): 1476 - 1484
    PDF (264 KB) | HTML

    Many real-world multistate systems, including distribution systems and supply chain management systems, can be modeled as multistate flow networks (MFNs). A single quick-path MFN (QMFN) is a special MFN with two characteristics, bandwidth and lead time, in each arc. A (d, τ)-quick path (i.e., (d, τ)-QP) is also a special minimal path (MP) such that d units of data can be sent from the source node to the sink node within τ units of time. The associated QMFN reliability problem evaluates the probability that a (d, τ)-QP exists in a QMFN. All known algorithms for this reliability problem require the advance determination of all MPs, which is an NP-hard problem. A very straightforward and easily programmed algorithm derived from the universal generating function method (UGFM) is suggested to find all (d, τ)-QPs prior to calculating the QMFN reliability, without the necessity of all MPs being known in advance. The correctness of the proposed UGFM is proven, and an analysis of its computational complexity indicates that it is more efficient than known algorithms. An example is provided by way of illustration.

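    A toy Python sketch of the universal generating function representation for a single two-arc path, with made-up bandwidth-state probabilities and fixed lead times; the paper's UGFM operates on the whole multistate network, but the composition rule is similar in spirit.

        from itertools import product
        from collections import defaultdict

        # UGF of an arc: a map from bandwidth state to its probability (hypothetical values).
        arc1 = {3: 0.7, 1: 0.2, 0: 0.1}
        arc2 = {2: 0.8, 0: 0.2}
        lead_time = 1.0 + 0.5                 # hypothetical fixed lead times of the two arcs

        def compose_series(u, v):
            # Path bandwidth is the minimum of the arc bandwidths; probabilities multiply.
            out = defaultdict(float)
            for (b1, p1), (b2, p2) in product(u.items(), v.items()):
                out[min(b1, b2)] += p1 * p2
            return dict(out)

        path = compose_series(arc1, arc2)
        d, tau = 4, 4.0                       # send d units within tau time units
        # A state yields a (d, tau)-quick path iff lead_time + d / bandwidth <= tau.
        reliability = sum(p for b, p in path.items() if b > 0 and lead_time + d / b <= tau)
        print(path, round(reliability, 3))
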
  • TAIEX Forecasting Using Fuzzy Time Series and Automatically Generated Weights of Multiple Factors

    Page(s): 1485 - 1495
    PDF (366 KB) | HTML

    In this paper, we present a new method to forecast the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) using fuzzy time series and automatically generated weights of multiple factors. The proposed method uses the variation magnitudes of adjacent historical data to generate fuzzy variation groups of the main factor (i.e., the TAIEX) and the elementary secondary factors (i.e., the Dow Jones, the NASDAQ, and the M1B), respectively. Based on the variation magnitudes of the main factor TAIEX and the elementary secondary factors of a particular trading day, it constructs the occurrence vector of the main factor and the occurrence vectors of the elementary secondary factors on the trading day, respectively. By calculating the correlation coefficients between the numerical data series of the main factor and the numerical data series of each elementary secondary factor, respectively, it calculates the relevance degree between the forecasted variation of the main factor and the forecasted variation of each elementary secondary factor. Based on the correlation coefficients between the numerical data series of the main factor and the numerical data series of each elementary secondary factor on a trading day, it automatically generates the weights of the occurrence vector of the main factor and the occurrence vector of each elementary secondary factor on the trading day, respectively. Then, it calculates the forecasted variation of the main factor and the forecasted variation of each elementary secondary factor on the trading day, respectively, to obtain the final forecasted variation on the trading day. Finally, based on the closing index of the TAIEX on the trading day and the final forecasted variation on the trading day, it generates the forecasted value of the next trading day. The experimental results show that the proposed method outperforms the existing methods.

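    The correlation-based weighting can be illustrated with a small numpy sketch; the variation series, the per-factor forecasted variations, and the closing index below are made-up numbers, and the paper's fuzzy-variation-group construction is not reproduced here.

        import numpy as np

        # Made-up daily variation series: main factor (TAIEX) and two secondary factors.
        taiex  = np.array([12.0, -5.0, 8.0, -3.0, 10.0])
        dow    = np.array([90.0, -40.0, 60.0, -10.0, 70.0])
        nasdaq = np.array([30.0, -20.0, 10.0, -5.0, 25.0])

        # Correlation of each secondary factor with the main factor becomes its weight.
        corrs = np.array([np.corrcoef(taiex, s)[0, 1] for s in (dow, nasdaq)])
        weights = np.concatenate(([1.0], corrs))
        weights = weights / weights.sum()

        # Hypothetical per-factor forecasted variations for the next trading day (main, Dow, NASDAQ).
        forecasted_variations = np.array([6.0, 5.0, 4.0])
        final_variation = weights @ forecasted_variations

        closing_index = 7500.0
        print("forecast for the next trading day:", round(closing_index + final_variation, 2))
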
  • Using Machine Vision and Hand-Motion Control to Improve Crane Operator Performance

    Page(s): 1496 - 1503
    PDF (1078 KB) | HTML

    The payload oscillation inherent to all cranes makes it challenging for human operators to manipulate payloads quickly, accurately, and safely. Manipulation difficulty is also increased by nonintuitive crane-control interfaces. This paper describes a new interface that allows operators to drive a crane by moving a hand-held device (wand or glove) freely in space. A crane-mounted camera tracks the movement of the hand-held device, the position of which is used to drive the crane. Two control architectures were investigated. The first uses a simple feedback controller, and the second uses feedback and an input shaper. Two operator studies demonstrate that hand-motion crane control is faster and safer than using a standard push-button pendant control.

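    Input shaping is one common way to suppress crane payload sway; below is a minimal Python sketch of a standard two-impulse zero-vibration (ZV) shaper with a placeholder payload frequency and damping ratio. The paper combines feedback with an input shaper but does not necessarily use this particular shaper or these values.

        import math

        def zv_shaper(natural_freq_hz, damping_ratio):
            # Two impulses whose amplitudes and timing cancel residual oscillation
            # of a lightly damped second-order (pendulum-like) payload mode.
            wn = 2 * math.pi * natural_freq_hz
            wd = wn * math.sqrt(1 - damping_ratio ** 2)            # damped natural frequency
            k = math.exp(-damping_ratio * math.pi / math.sqrt(1 - damping_ratio ** 2))
            amplitudes = [1 / (1 + k), k / (1 + k)]
            times = [0.0, math.pi / wd]                            # second impulse at half a damped period
            return amplitudes, times

        # Placeholder payload mode: 0.35 Hz with 1% damping, typical of a long cable.
        print(zv_shaper(0.35, 0.01))
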
  • Risk and Safety Program Performance Evaluation and Business Process Modeling

    Page(s): 1504 - 1513
    PDF (663 KB)

    There is an increasing need for agencies to coordinate their interdependent risk assessment, risk management, and risk communication activities in compliance with risk program guidelines. In particular, it is a challenge to measure risk program compliance and maturity with respect to guidelines such as the U.S. Office of Management and Budget (OMB) memorandum “Updated Principles for Risk Analysis,” among others. This paper demonstrates a systemic approach to evaluating large-scale risk program maturity using business process modeling and self-assessment methods. This approach will be helpful to agencies implementing risk guidelines such as those of the OMB, the U.S. Government Accountability Office, the U.S. Department of Homeland Security, the U.S. Department of Defense, and others. This paper will be of interest to risk managers, agencies, and risk and safety analysts engaged in the conception, implementation, and evaluation of risk and safety programs.

  • A New Method for Preliminary Identification of Gene Regulatory Networks from Gene Microarray Cancer Data Using Ridge Partial Least Squares With Recursive Feature Elimination and Novel Brier and Occurrence Probability Measures

    Page(s): 1514 - 1528
    PDF (976 KB) | HTML

    This paper proposes a new method for preliminary identification of gene regulatory networks (GRNs) from gene microarray cancer data based on ridge partial least squares (RPLS) with recursive feature elimination (RFE) and novel Brier and occurrence probability measures. It facilitates the preliminary identification of meaningful pathways and genes for a specific disease, rather than focusing on selecting a small set of genes for classification purposes as in conventional studies. First, RFE and a novel Brier error measure are incorporated in RPLS to reduce the estimation variance using a two-nested cross-validation (CV) approach. Second, novel Brier and occurrence probability-based measures are employed in ranking genes across different CV subsamples. This helps to detect different GRNs from correlated genes that consistently appear in the ranking lists. Therefore, unlike most conventional approaches that emphasize the best classification using a small gene set, the proposed approach is able to simultaneously offer good classification accuracy and identify a more comprehensive set of genes and their associated GRNs. Experimental results on the analysis of three publicly available cancer data sets, namely leukemia, colon, and prostate, show that very stable gene sets from different but relevant GRNs can be identified, and most of them are found to be of biological significance according to previous findings in biological experiments. These results suggest that the proposed approach may serve as a useful tool for preliminary identification of genes and their associated GRNs of a particular disease for further biological studies using microarray or similar data.

  • Modeling Human Recursive Reasoning Using Empirically Informed Interactive Partially Observable Markov Decision Processes

    Page(s): 1529 - 1542
    PDF (946 KB) | HTML

    Recursive reasoning of the form “what do I think that you think that I think” (and so on) arises often while acting in multiagent settings. Previously, multiple experiments studied the level of recursive reasoning generally displayed by humans while playing sequential general-sum and fixed-sum two-player games. The results show that subjects experiencing a general-sum strategic game display the first or second level of recursive thinking, with the first level being more prominent. However, if the game is made simpler and more competitive with fixed-sum payoffs, subjects predominantly attribute first-level recursive thinking to opponents, thereby acting at the second level. In this article, we model the behavioral data obtained from the studies using the interactive partially observable Markov decision process, appropriately simplified and augmented with well-known models simulating human learning and decision making. We experiment with data collected at different points in the study to learn the models' parameters. The accuracy of the predictions by our models is evaluated by comparing them with the observed study data, and the significance of the fit is demonstrated by comparing the mean squared error of our model predictions with that of a random hypothesis. The accuracy of the predictions suggests that these models could be viable ways of computationally modeling strategic behavioral data in a general way. While we do not claim the cognitive plausibility of the models in the absence of more evidence, they represent promising steps toward understanding and computationally simulating strategic human behavior.

  • Phase Constancy in a Ladder Model of Neural Dynamics

    Page(s): 1543 - 1551
    PDF (659 KB) | HTML

    This paper presents a novel concept for modeling biological systems by preserving the natural rules governing the system's dynamics, i.e., their intrinsic fractal (recurrent) structure. The purpose of this paper is to illustrate the capability of recurrent ladder networks to capture the intrinsic recurrent anatomy of neural networks and to provide a dynamic model that exhibits typical neuronal phenomena, such as phase constancy. As an illustrative example, a simplified model of a neural network consisting of motor neurons is used in the simulation of a recurrent ladder network. Starting from a generalized approach, it is shown that, in the steady state, the result converges to constant-phase behavior. The outcome of this paper indicates that the proposed model is a suitable tool for specific neural models in various neuroscience applications, being able to capture their fractal structure and the corresponding fractal dynamic behavior. A link to the dynamics of EEG activity is suggested. By studying specific neural populations by means of the ladder network model presented in this paper, one might be able to understand the changes observed in the EEG with normal aging or with neurodegenerative disorders.

  • Formulation of Reduced-Taskload Optimization Models for Conflict Resolution

    Page(s): 1552 - 1561
    PDF (839 KB) | HTML

    This paper explores methods to include aspects of controller taskload into conflict-resolution programs through a parametric approach. We are motivated by the desire to create conflict-resolution decision-support tools that operate within a human-in-the-loop control architecture by actively accounting for, and moderating, controller taskload. Specifically, we introduce two conflict-resolution programs with the objective of managing controller conflict-resolution taskload, i.e., the number of maneuvers used to separate air traffic. Managing conflict-resolution taskload is accomplished by penalizing aircraft maneuvers through their L1 norm in the cost function or constraining the number of maneuvers directly. Analysis of the programs reveals that both approaches are successful at managing controller conflict-resolution taskload and minimizing fuel burn. Directly constraining conflict-resolution taskload is more successful at minimizing the variation in the number of aircraft maneuvers issued and returning the aircraft to their desired exit point. Penalizing maneuvers through L1 norm costs is more successful at reducing controller conflict-resolution taskload at lower traffic volumes. Ultimately, results demonstrate that the inclusion of such parametric models can successfully regulate controller conflict-resolution taskload.

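    The two formulations can be contrasted with a tiny Python sketch over a handful of hypothetical candidate resolutions; the maneuver vectors, fuel-burn costs, and penalty weight are invented, and the paper works with full optimization programs rather than candidate enumeration.

        import numpy as np

        # Candidate resolutions: heading changes (degrees) for four aircraft and a fuel-burn cost.
        candidates = {
            "A": (np.array([10.0, -5.0, 0.0, 0.0]), 120.0),
            "B": (np.array([4.0, -3.0, 2.0, 1.0]), 110.0),
            "C": (np.array([15.0, 0.0, 0.0, 0.0]), 135.0),
        }

        def l1_penalized(penalty):
            # Taskload enters the cost through the L1 norm of the maneuver vector.
            return min(candidates, key=lambda k: candidates[k][1] + penalty * np.abs(candidates[k][0]).sum())

        def count_constrained(max_maneuvers):
            # Taskload is capped directly by the number of nonzero maneuvers; fuel burn is minimized.
            feasible = {k: v for k, v in candidates.items() if np.count_nonzero(v[0]) <= max_maneuvers}
            return min(feasible, key=lambda k: feasible[k][1])

        print(l1_penalized(2.0), count_constrained(2))
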
  • Investigating Human Performance in a Virtual Reality Haptic Simulator as Influenced by Fidelity and System Latency

    Page(s): 1562 - 1566
    PDF (193 KB) | HTML

    The objective of this study was to demonstrate the utility of an established model of human motor behavior for assessing the fidelity of a virtual reality (VR) and haptic-based simulation of fine motor task performance. The study also serves as a basis for formulating general performance-based simulator-design guidelines toward balancing perceived realism with simulator limitations, such as latency resulting from graphic and haptic rendering. A low-fidelity surgical simulator was developed as an example VR for study, and user performance was tested in a simplified tissue-cutting task using a virtual scalpel. The observed aspect of the simulation was a discrete-movement task under different system-lag conditions and settings of task difficulty. Results revealed that user performance in the VR conformed to Fitts' law of motor behavior and that performance degraded with increasing task difficulty and system time lag. In general, the findings of this work support the prediction of human performance under various simulator-design conditions using an established model of motor-control behavior and the formulation of human-performance-based simulator-design principles.

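    Because the analysis rests on Fitts' law, a worked Python example of the relation may help; the intercept and slope below are arbitrary placeholders rather than coefficients estimated in the study.

        import math

        def fitts_movement_time(distance, width, a=0.1, b=0.15):
            # Fitts' law: MT = a + b * log2(2D / W); the log term is the index of difficulty (bits).
            index_of_difficulty = math.log2(2 * distance / width)
            return a + b * index_of_difficulty

        # A smaller, farther target (higher index of difficulty) yields a longer predicted movement time.
        print(round(fitts_movement_time(distance=0.20, width=0.02), 3))   # about 4.3 bits
        print(round(fitts_movement_time(distance=0.05, width=0.04), 3))   # about 1.3 bits
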
  • A Comparative Study of Pitch-Based Gestures in Nonverbal Vocal Interaction

    Page(s): 1567 - 1571
    PDF (183 KB) | HTML

    Nonverbal vocal interaction (NVVI) is an input modality by means of which users control the computer by producing sounds other than speech. Previous research in this field has focused mainly on studying isolated instances of NVVI (such as mouse cursor control in computer games) and their performance. This paper presents a study with 36 elderly users in which basic NVVI vocal gestures (commands) were ranked by their perceived fatigue, satisfaction, and efficiency. The results of this study inspired a set of NVVI gesture design guidelines that are also presented in this paper.


Aims & Scope

The journal covers the fields of systems engineering and human-machine systems. Systems engineering includes efforts that involve issue formulation, issue analysis and modeling, and decision making and issue interpretation at any of the life-cycle phases associated with the definition, development, and implementation of large systems.


This Transactions ceased production in 2012. The current retitled publication is IEEE Transactions on Systems, Man, and Cybernetics: Systems.


Meet Our Editors

Editor-in-Chief
Dr. Witold Pedrycz
University of Alberta