
IEEE Transactions on Systems, Man, and Cybernetics: Systems

Issue 3 • Date May 2013


Displaying Results 1 - 25 of 25
  • Table of contents

    Publication Year: 2013 , Page(s): C1
    Freely Available from IEEE
  • IEEE Transactions on Systems, Man, and Cybernetics publication information

    Publication Year: 2013 , Page(s): C2
    Freely Available from IEEE
  • Vulnerability of Smart Grids With Variable Generation and Consumption: A System of Systems Perspective

    Publication Year: 2013 , Page(s): 477 - 487
    Cited by:  Papers (1)

    This paper examines the vulnerabilities of the electric power grid and its associated communication network in the face of intermittent power generation and uncertain demand, within a complex-network framework for the analysis of smart grids. The perspective is typical of the system-of-systems analysis of interdependencies in a critical infrastructure (CI), i.e., the smart grid for electricity distribution. We assess how the integration of the two systems copes with requests to increase power generation due to enhanced power consumption at a load bus. We define adequate measures of vulnerability to identify the most limiting communication time delays. We quantify the probability that a reduction in the functionality of the communication system yields a faulty condition in the electric power grid, and find that a practical indicator of the coupling strength between the two networks is the frequency of load-shedding actions due to excessive communication time delay. We evaluate safety margins with respect to communication specifications, i.e., the data rate of the network, needed to comply with the safety requirements of the electric power grid. Finally, we find a catastrophic phase transition with respect to this parameter, which affects the safe operation of the CI.

  • Using Formal Verification to Evaluate Human-Automation Interaction: A Review

    Publication Year: 2013 , Page(s): 488 - 503
    Cited by:  Papers (6)

    Failures in complex systems controlled by human operators can be difficult to anticipate because of unexpected interactions between the elements that compose the system, including human-automation interaction (HAI). HAI analyses would benefit from techniques that support investigating the possible combinations of system conditions and HAIs that might result in failures. Formal verification is a powerful technique used to mathematically prove that an appropriately scaled model of a system does or does not exhibit desirable properties. This paper discusses how formal verification has been used to evaluate HAI. It has been used to evaluate human-automation interfaces for usability properties and to find potential mode confusion. It has also been used to evaluate system safety properties in light of formally modeled task analytic human behavior. While capable of providing insights into problems associated with HAI, formal verification does not scale as well as other techniques such as simulation. However, advances in formal verification continue to address this problem, and approaches that allow it to complement more traditional analysis methods can potentially avoid this limitation.

    Open Access
  • LITHE: An Agile Methodology for Human-Centric Model-Based Systems Engineering

    Publication Year: 2013 , Page(s): 504 - 521

    This paper proposes an agile model-based systems engineering (SE) methodology, LITHE, to engineer contemporary large, complex, and interdisciplinary systems of systems. The methodology relates the processes, methods, and tools in order to support an effective model-based development context. LITHE uses a universal and intuitive SE base process, reducing the complexity and intricacy of the base methods, emphasizing agile principles such as continuous communication, feedback and stakeholder involvement, short iterations, and rapid response, and encouraging the use of a coherent system model developed through standard graphical systems modeling languages. Aiming to support the development of successful systems that satisfy stakeholders' expectations, the methodology is particularly concerned with human systems integration, so the related fundamental aspects are considered throughout the engineering process. The LITHE methodology also includes a supporting graphical tool intended as an agile instrument for systems engineers in a model-based development environment. To illustrate the effectiveness of the proposed methodology and to provide some validation, an empirical case study related to the development of a real large and complex system (the Guiding Urban Intelligent Traffic and Environment system) is also described.

  • Coupled Factorial Hidden Markov Models (CFHMM) for Diagnosing Multiple and Coupled Faults

    Publication Year: 2013 , Page(s): 522 - 534
    Cited by:  Papers (1)

    In this paper, we formulate a coupled factorial hidden Markov model (CFHMM) framework to diagnose dependent faults occurring over time (the dynamic case). In our previous research, the problem of diagnosing multiple faults over time (dynamic multiple fault diagnosis (DMFD)) was solved based on a sequence of test outcomes by assuming that the faults and their time evolution are independent. This problem is NP-hard, and, consequently, we developed a polynomial approximation algorithm using Lagrangian relaxation within an FHMM framework. Here, we extend this formulation to a mixed-memory Markov coupling model, termed the dynamic coupled fault diagnosis (DCFD) problem, to determine the most likely sequence of (dependent) fault states, the one that best explains the observed test outcomes over time. An iterative Gauss-Seidel coordinate ascent optimization method is proposed for solving the DCFD problem. A soft Viterbi algorithm is also implemented within the framework for decoding dependent fault states over time. We demonstrate the algorithm on simulated systems with coupled faults, and the results show that this approach improves the correct isolation rate (CI) as compared to the formulation where independent fault states (DMFD) are assumed. As a by-product, we show empirically that, while diagnosing independent faults, the DMFD algorithm based on the block coordinate ascent method, although it does not provide a measure of suboptimality, provides better primal cost and higher CI than the Lagrangian relaxation method for the independent-fault case. Two real-world examples (a hybrid electric vehicle and a mobile autonomous robot) with coupled faults are also used to evaluate the proposed framework.
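
    The Viterbi decoding step that this framework builds on can be illustrated on a plain two-state hidden Markov model. This is a generic sketch, not the authors' coupled formulation; the fault-persistence and test-outcome probabilities below are invented for illustration:

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Most likely hidden-state sequence of an HMM (max-product decoding).

    pi: initial state probabilities, shape (S,)
    A:  transition matrix, A[i, j] = P(next state j | state i)
    B:  emission matrix, B[i, k] = P(observing symbol k | state i)
    obs: list of observed symbol indices
    """
    S, T = len(pi), len(obs)
    logd = np.log(pi) + np.log(B[:, obs[0]])   # best log-score per state at t=0
    back = np.zeros((T, S), dtype=int)         # backpointers
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)     # scores[i, j]: come from i, go to j
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy fault model: state 0 = healthy, 1 = faulty; symbol 1 = a failed test.
pi = np.array([0.9, 0.1])
A = np.array([[0.95, 0.05], [0.10, 0.90]])   # faults tend to persist
B = np.array([[0.80, 0.20], [0.10, 0.90]])
print(viterbi(pi, A, B, [1, 1, 1, 1]))       # → [1, 1, 1, 1]
```

With consistently failing test outcomes the decoder commits to the persistent faulty state; a run of passing outcomes keeps it in the healthy state.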

  • Model-Based Prognostics With Concurrent Damage Progression Processes

    Publication Year: 2013 , Page(s): 535 - 546
    Cited by:  Papers (7)

    Model-based prognostics approaches rely on physics-based models that describe the behavior of systems and their components. These models must account for the several different damage processes occurring simultaneously within a component. Each of these damage and wear processes contributes to the overall component degradation. We develop a model-based prognostics methodology that consists of a joint state-parameter estimation problem, in which the state of a system along with parameters describing the damage progression are estimated, followed by a prediction problem, in which the joint state-parameter estimate is propagated forward in time to predict end of life and remaining useful life. The state-parameter estimate is computed using a particle filter and is represented as a probability distribution, allowing the prediction of end of life and remaining useful life within a probabilistic framework that supports uncertainty management. We also develop a novel variance control algorithm that maintains an uncertainty bound around the unknown parameters to limit the amount of estimation uncertainty and, consequently, reduce prediction uncertainty. We construct a detailed physics-based model of a centrifugal pump that includes damage progression models, to which we apply our model-based prognostics algorithm. We illustrate the operation of the prognostic solution with a number of simulation-based experiments and demonstrate the performance of the approach when multiple damage mechanisms are active.
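
    The joint state-parameter estimation idea can be sketched with a bootstrap particle filter on a one-dimensional toy damage model. The linear wear dynamics, noise levels, and wear-rate parameter `w` below are invented assumptions, not the authors' pump physics:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(ys, n=2000, q=0.05, r=0.2):
    """Joint state-parameter estimation for a toy damage model:
    d[t] = d[t-1] + w + process noise, observed as y[t] = d[t] + noise.
    Each particle carries the damage state d AND the unknown wear rate w."""
    d = np.zeros(n)                          # damage particles (start undamaged)
    w = rng.uniform(0.0, 1.0, n)             # wear-rate hypotheses
    for y in ys:
        w += rng.normal(0.0, 0.005, n)       # parameter jitter keeps diversity
        d += w + rng.normal(0.0, q, n)       # propagate damage forward
        lik = np.exp(-0.5 * ((y - d) / r) ** 2)
        idx = rng.choice(n, size=n, p=lik / lik.sum())   # resample
        d, w = d[idx], w[idx]
    return d.mean(), w.mean()                # point estimates from the cloud

true_w = 0.3                                 # hypothetical true wear rate
ys = [true_w * t + rng.normal(0.0, 0.2) for t in range(1, 31)]
d_hat, w_hat = particle_filter(ys)
```

After 30 noisy measurements the particle cloud concentrates around the true wear rate, which is what makes propagating it forward to an end-of-life threshold meaningful.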

  • Dynamic Set-Covering for Real-Time Multiple Fault Diagnosis With Delayed Test Outcomes

    Publication Year: 2013 , Page(s): 547 - 562

    The set-covering problem is widely used to model many real-world applications. In this paper, we formulate a generalization of set-covering, termed dynamic set-covering (DSC), which involves a series of coupled set-covering problems over time. We motivate the DSC problem from the viewpoint of a dynamic multiple fault diagnosis problem, wherein faults, possibly intermittent, evolve over time; the fault-test dependencies are deterministic (components associated with passed tests cannot be suspected to be faulty and at least one of the components associated with failed tests is faulty), and the test outcomes may be observed with delay. The objective of the DSC problem is to infer the most probable time sequence of a parsimonious set of failure sources that explains the observed test outcomes over time. The DSC problem is NP-hard and intractable due to the fault-test dependency matrix that couples the failed tests and faults via the constraint matrix, and the temporal dependence of failure sources over time. By relaxing the coupling constraints using Lagrange multipliers, the DSC problem can be decoupled into independent subproblems, one for each fault. Each subproblem is solved using the Viterbi decoding algorithm, and a primal feasible solution is constructed by modifying the Viterbi solutions via a heuristic. The Lagrange multipliers are updated using a subgradient method. The proposed Viterbi-Lagrangian relaxation algorithm provides a measure of suboptimality via an approximate duality gap. As a major practical extension of the above problem, we also consider the problem of diagnosing faults with delayed test outcomes, termed delay DSC. A detailed experimental evaluation of the algorithms is provided using real-world problems that exhibit masking faults.
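
    For contrast with the dynamic formulation, the classical (static) set-covering problem it generalizes can be solved approximately by the standard greedy heuristic. This generic sketch is unrelated to the authors' Viterbi-Lagrangian algorithm, and the fault/test data are made up:

```python
def greedy_set_cover(universe, subsets):
    """Greedy ln(n)-approximation: repeatedly pick the subset that covers
    the most still-uncovered elements."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda s: len(subsets[s] & uncovered))
        if not subsets[best] & uncovered:
            raise ValueError("universe not coverable")
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

# Candidate faults, each "covering" the failed tests it can explain.
failed_tests = {"t1", "t2", "t3", "t4"}
faults = {"f1": {"t1", "t2"}, "f2": {"t3"}, "f3": {"t2", "t3", "t4"}}
print(greedy_set_cover(failed_tests, faults))   # → ['f3', 'f1']
```

The chosen faults form a parsimonious explanation of the failed tests, which is exactly the per-epoch inference that DSC strings together over time.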

  • Competitive Evolution of Tactical Multiswarm Dynamics

    Publication Year: 2013 , Page(s): 563 - 569

    The dynamics of large decentralized groups of agents, or swarms, can be difficult to characterize due to complex and often unpredictable behaviors that arise from low-level interactions between agents. When designing multiagent systems, these emergent behaviors can have hidden and undesirable implications on the overall operation of the swarm. This paper examines the use of inversion of swarm dynamics to refine individual agents' rules of operation in order to achieve a given collective goal and applies this method to a scenario of tactical relevance: the point defense of a very important person between two attacking and defending swarms. An alternating competitive evolution is used in a toggled behavioral arms race in order to refine tactics and anticipate counteractions. Results include creative solutions with varying levels of success at addressing defensive tactical scenarios, with the attacking swarms evolving behaviors (such as rushing, splitting, and baiting) and the defending swarm evolving proactive and reactive solutions.

  • Parameterized Schemes of Metaheuristics: Basic Ideas and Applications With Genetic Algorithms, Scatter Search, and GRASP

    Publication Year: 2013 , Page(s): 570 - 586
    Cited by:  Papers (1)

    Some optimization problems can be tackled only with metaheuristic methods, and to obtain a satisfactory metaheuristic, it is necessary to develop and experiment with various methods and to tune them for each particular problem. The use of a unified scheme for metaheuristics facilitates their development by reusing the basic functions. In our proposal, the unified scheme is improved by adding transitional parameters. Those parameters are included in each of the functions, in such a way that different values of the parameters provide different metaheuristics or combinations of metaheuristics. Thus, the unified parameterized scheme eases the development of metaheuristics and their application. In this paper, we present the basic ideas of the parameterization of metaheuristics. This methodology is tested with the application of local and global search methods (greedy randomized adaptive search procedure [GRASP], genetic algorithms, and scatter search), and their combinations, to three scientific problems: obtaining satisfactory simultaneous equation models from a set of values of the variables, a task-to-processor assignment problem with independent tasks and memory constraints, and the p-hub median location-allocation problem.
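
    The flavor of a transitional parameter can be shown with a tiny GRASP for a 0/1 knapsack instance, where `alpha` slides the construction step from pure greedy (0) toward pure random (1). The problem instance and parameter values are invented for illustration, not taken from the paper:

```python
import random

def grasp_knapsack(values, weights, cap, alpha=0.3, iters=50, seed=1):
    """GRASP: randomized greedy construction + local search, best of `iters`.
    alpha controls the size of the restricted candidate list (RCL)."""
    rnd = random.Random(seed)
    n = len(values)
    best_val = 0

    def value(sol):
        return sum(values[i] for i in sol)

    def weight(sol):
        return sum(weights[i] for i in sol)

    for _ in range(iters):
        sol, cand = set(), [i for i in range(n) if weights[i] <= cap]
        while cand:
            cand.sort(key=lambda i: values[i] / weights[i], reverse=True)
            rcl = cand[: max(1, int(alpha * len(cand)))]   # restricted list
            i = rnd.choice(rcl)
            sol.add(i)
            cand = [j for j in cand if j != i and weight(sol) + weights[j] <= cap]
        improved = True                      # local search: one-for-one swaps
        while improved:
            improved = False
            for i in list(sol):
                for j in range(n):
                    if j in sol:
                        continue
                    trial = (sol - {i}) | {j}
                    if weight(trial) <= cap and value(trial) > value(sol):
                        sol, improved = trial, True
        best_val = max(best_val, value(sol))
    return best_val

print(grasp_knapsack([6, 10, 12, 7], [1, 2, 3, 2], cap=5))   # → 23
```

Setting `alpha=0` recovers a deterministic greedy heuristic and `alpha=1` a random multistart, which is the sense in which one parameterized scheme spans a family of metaheuristics.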

  • General and Interval Type-2 Fuzzy Face-Space Approach to Emotion Recognition

    Publication Year: 2013 , Page(s): 587 - 605
    Cited by:  Papers (4)

    Facial expressions of a person representing similar emotion are not always unique. Naturally, the facial features of a subject taken from different instances of the same emotion have wide variations. In the presence of two or more facial features, the variation of the attributes together makes the emotion recognition problem more complicated. This variation is the main source of uncertainty in the emotion recognition problem, which has been addressed here in two steps using type-2 fuzzy sets. First, a type-2 fuzzy face space is constructed with the background knowledge of facial features of different subjects for different emotions. Second, the emotion of an unknown facial expression is determined based on the consensus of the measured facial features with the fuzzy face space. Both interval and general type-2 fuzzy sets (GT2FS) have been used separately to model the fuzzy face space. The interval type-2 fuzzy set (IT2FS) involves primary membership functions for m facial features obtained from n subjects, each having l instances of facial expressions for a given emotion. The GT2FS, in addition to employing the primary membership functions mentioned above, also involves secondary memberships for each primary membership curve, which have been obtained here by formulating and solving an optimization problem. The optimization problem attempts to minimize the difference between two decoded signals: the first being the type-1 defuzzification of the average primary membership functions obtained from the n subjects, while the second refers to the type-2 defuzzified signal for a given primary membership function with secondary memberships as unknowns. The uncertainty management policy adopted using GT2FS has resulted in a classification accuracy of 98.333%, in comparison to 91.667% obtained by its interval type-2 counterpart. A small improvement (approximately 2.5%) in classification accuracy by IT2FS has been attained by pre-processing measurements using the well-known interval approach.
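
    The footprint-of-uncertainty idea behind IT2FS can be sketched with a Gaussian primary membership function whose mean is uncertain, yielding an interval membership grade rather than a single value. This is a minimal generic sketch; the feature values are invented:

```python
import math

def it2_gauss_membership(x, m1, m2, sigma):
    """Interval type-2 Gaussian membership with uncertain mean in [m1, m2].

    Returns (lower, upper): the footprint of uncertainty replaces the single
    membership grade of an ordinary (type-1) fuzzy set."""
    g = lambda u, m: math.exp(-0.5 * ((u - m) / sigma) ** 2)
    if m1 <= x <= m2:
        upper = 1.0                      # inside the uncertain-mean band
    else:
        upper = g(x, m1) if x < m1 else g(x, m2)
    lower = min(g(x, m1), g(x, m2))      # governed by the farther extreme mean
    return lower, upper

# A (made-up) normalized mouth-opening feature measured at 0.4.
lo, hi = it2_gauss_membership(x=0.4, m1=0.3, m2=0.5, sigma=0.1)
```

The gap between `lo` and `hi` encodes exactly the across-subject variation of a feature that the abstract identifies as the main source of uncertainty.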

  • Tracking People Motion Based on Extended Condensation Algorithm

    Publication Year: 2013 , Page(s): 606 - 618
    Cited by:  Papers (1)

    People counting systems are widely used in surveillance applications. In this paper, we present a solution to bidirectional people counting based on information provided by an overhead stereo system. Four fundamental aspects can be identified: the detection and tracking of human motion using an extended particle filter, the use of 3-D measurements to increase the system's robustness, a modified K-means algorithm to provide the number of hypotheses at each time step, and, finally, trajectory generation to facilitate people counting in different directions. The proposed algorithm is designed to solve problems of occlusion, without counting objects such as shopping trolleys or bags. A processing rate of around 30 frames/s is necessary in order to capture the real-time trajectory of people and obtain robust tracking results. We validated the system on various test videos, achieving a hit rate between 95% and 99%, depending on the number of people crossing the counting area.
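
    The hypothesis-count step relies on a modified K-means; plain Lloyd's K-means (a generic sketch, not the authors' modification) already shows the mechanics on made-up overhead-camera head candidates:

```python
import numpy as np

def kmeans(points, k, iters=100, seed=0):
    """Plain Lloyd's K-means: assign points to the nearest centroid, then
    recompute each centroid as the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels

# Two well-separated clusters of head candidates in (x, y, height) space.
pts = np.array([[0.0, 0.0, 1.70], [0.1, 0.0, 1.80], [0.0, 0.1, 1.75],
                [3.0, 3.0, 1.60], [3.1, 3.0, 1.65], [3.0, 3.1, 1.70]])
cent, labels = kmeans(pts, k=2)
```

The two recovered centroids correspond to two tracking hypotheses; the paper's modification additionally adapts the number of clusters k over time.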

  • People Matching for Transportation Planning Using Texel Camera Data for Sequential Estimation

    Publication Year: 2013 , Page(s): 619 - 629

    This paper addresses automatic people matching in the dynamic setting of public transportation, such as a bus, as people enter and then at some later time exit from a doorway. Matching a person entering to the same person exiting at a later time provides accurate information about individual riders, such as how long a person is on a bus and the associated stops the person uses. At a higher level, matching exits to previous entry events provides information about the distribution of traffic flow across the whole transportation system. The proposed techniques may be applied at any gateway where the flow of human traffic is to be analyzed. For the purpose of associating entry and exit events, a trellis optimization algorithm is used for sequence estimation, based on multiple texel camera measurements. Since the number of states in the trellis grows exponentially with the number of persons currently on the bus, a beam search pruning technique is employed to manage the computational and memory load. Experimental results using real texel camera measurements show 96% matching accuracy for 68 people exiting a bus in a randomized order. In a bus route simulation where a true traffic flow distribution is used to randomly draw entry and exit events for simulated riders, the proposed sequence estimation algorithm produces an estimated traffic flow distribution which provides an excellent match to the true distribution.
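
    Beam-search pruning of a trellis can be sketched in a few lines: keep only the `beam_width` best-scoring partial sequences at each step. This toy ignores the real constraint that each rider exits only once; the names and log-likelihoods are invented:

```python
import heapq

def beam_search(steps, beam_width):
    """steps: list of per-step candidate lists [(token, log_score), ...].
    Keeps only the beam_width best partial sequences by total log-score."""
    beam = [((), 0.0)]
    for options in steps:
        expanded = [(seq + (tok,), score + s)
                    for seq, score in beam
                    for tok, s in options]
        beam = heapq.nlargest(beam_width, expanded, key=lambda x: x[1])
    return beam[0]

# Exit events with (hypothetical) per-rider match log-likelihoods.
steps = [[("alice", -0.1), ("bob", -2.0)],
         [("bob", -0.2), ("carol", -1.0)],
         [("carol", -0.3), ("alice", -3.0)]]
best_seq, best_score = beam_search(steps, beam_width=2)
```

Instead of the exponentially many full sequences, memory stays bounded by the beam width, which is the trade-off the abstract describes.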

  • A Teleoperation Approach for Mobile Social Robots Incorporating Automatic Gaze Control and Three-Dimensional Spatial Visualization

    Publication Year: 2013 , Page(s): 630 - 642
    Cited by:  Papers (1)

    The teleoperation of mobile social robots requires operators to understand facial gestures and other nonverbal communication from a person interacting with the robot. It is also critical for the operator to comprehend the surrounding environment in order to facilitate both navigation and human-robot interaction. Allowing the operator to control the robot's gaze direction can help the operator observe a person's nonverbal communication; however, manually actuating a gaze increases the operator's workload and conflicts with the use of the robot's camera for navigation. To address these problems, the authors developed a teleoperation system that combines automatic control of the robot's gaze and a 3-D graphical representation of the surrounding environment, such as location of items and configuration of a shop. A study where a robot plays the role of a shopkeeper was conducted to validate the effectiveness of the proposed gaze-control technique and control interface. It was demonstrated that the combination of automatic gaze control and representations of spatial relationships improved the quality of the robot's interaction with the customer.

  • Incremental Lifecycle Validation of Knowledge-Based Systems Through CommonKADS

    Publication Year: 2013 , Page(s): 643 - 654

    This paper introduces an incremental validation method for knowledge-based systems (KBSs) based on a lifecycle model of system development. Although many validation methods have been proposed for KBSs, there remains a need for an incremental validation method based on a lifecycle model. Lifecycle models provide a formal framework for the developer that can be highly beneficial for the validation process. CommonKADS is the most commonly accepted of such lifecycle models and offers a de facto standard for building KBSs. The incremental validation method introduced in this paper is based on case testing and provides strict guidelines for selecting a set of test cases to validate the system. Most importantly, this validation method makes use of the results of prior test cases to guide the use of later test cases in subsequent development iterations. This facilitates the definition of an efficient set of test cases that provides effective system coverage. The proposed incremental validation method is evaluated, and the results are reported.

  • Robust Nonlinear Control of an Intrinsically Compliant Robotic Gait Training Orthosis

    Publication Year: 2013 , Page(s): 655 - 665
    Cited by:  Papers (1)

    Robot-assisted gait therapy is an emerging rehabilitation practice. This paper presents new experimental results with an intrinsically compliant robotic gait training orthosis and a trajectory tracking controller. The intrinsically compliant robotic orthosis has six degrees of freedom. Sagittal-plane hip and knee joints are powered by pneumatic muscle actuators in an opposing-pair configuration. The orthosis has a passive hip abduction/adduction joint and passive mechanisms to allow vertical and lateral translations of the trunk. A passive foot lifter with a spring mechanism is used to ensure sufficient dorsiflexion during the swing phase. A trajectory tracking controller based on a chattering-free robust variable structure control law was implemented in joint space to guide the subject's limbs along physiological gait trajectories. The performance of the robotic orthosis was evaluated in two gait training modes, namely, “trajectory tracking mode with maximum compliance” and “trajectory tracking mode with minimum compliance.” The experimental evaluations were carried out with ten neurologically intact subjects. The results show that the robotic orthosis is able to perform the gait training task in both modes. All subjects tended to deviate from the reference joint angle trajectories as robotic compliance increased, since they had more freedom to voluntarily drive the robotic orthosis.

  • Effect of Sleep Deprivation on Functional Connectivity of EEG Channels

    Publication Year: 2013 , Page(s): 666 - 672
    Cited by:  Papers (4)

    This paper presents the functional interdependences among electroencephalograph (EEG) signals collected from human subjects undergoing a controlled experiment over a period of 36 h of sleep deprivation. The EEG signals were recorded from 19 electrodes spread across the scalp. The interdependence among the signals was measured using synchronization likelihood (SL), which captures the dynamical (both linear and nonlinear) interdependence between two or more nonstationary time series. A network structure was evolved based on these SL values. Because the EEG signal is nonstationary, connectivity was evaluated not in fixed frequency bands but at various intrinsic modes, known as intrinsic mode functions (IMFs), generated using empirical mode decomposition. It was observed that the connectivity of the networks exhibits definite patterns at specific IMFs with increasing sleep deprivation at successive stages of the experiment. The results were validated using subjective assessment and audiovisual response tests.

  • The TFC Model: Tensor Factorization and Tag Clustering for Item Recommendation in Social Tagging Systems

    Publication Year: 2013 , Page(s): 673 - 688
    Cited by:  Papers (2)

    In this paper, a novel Tensor Factorization and tag Clustering (TFC) model is presented for item recommendation in social tagging systems. The TFC model consists of three distinctive steps, in each of which important innovative elements are proposed. More specifically, in the first step, content information is exploited to propagate tags between conceptually similar items based on a relevance feedback mechanism, in order to address sparsity and “cold start” problems. In the second step, sparsity is further handled by generating tag clusters and revealing topics, following an innovative tf·idf weighting scheme. Furthermore, we experimentally show that a small number of expert tags can improve the quality of recommendations, since they contribute to more coherent tag clusters. In the third step, the latent associations among users, topics, and items are revealed by exploiting the tensor factorization (TF) technique of high-order singular value decomposition (HOSVD). In this way, the proposed TFC model tackles problems of real-world applications, which produce noise and decrease the quality of recommendations. In our experiments with real-world social data, we show that the proposed TFC model outperforms other state-of-the-art methods that also exploit the TF technique of HOSVD.
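
    The tf·idf idea underlying the tag-clustering step follows the standard definition: term frequency scaled by log inverse document frequency. This is a generic sketch over made-up tag lists, not the authors' innovative variant:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Standard tf-idf: term frequency times log inverse document frequency."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))   # document frequency per tag
    out = []
    for d in docs:
        tf = Counter(d)
        out.append({t: (c / len(d)) * math.log(n / df[t]) for t, c in tf.items()})
    return out

# Items described by (made-up) tag lists.
tagged_items = [["rock", "guitar", "live"],
                ["rock", "pop"],
                ["jazz", "live", "live"]]
weights = tf_idf(tagged_items)
```

Rare, discriminative tags ("guitar") end up weighted above ubiquitous ones ("rock"), which is why clustering on these weights yields more coherent topics.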

  • H∞ Controller Design of Networked Control Systems with Markov Packet Dropouts

    Publication Year: 2013 , Page(s): 689 - 697
    Cited by:  Papers (6)

    This paper presents an H∞ controller design method for networked control systems (NCSs) with bounded packet dropouts. A new model is proposed to represent packet dropouts satisfying a Markov process, together with late-arriving packets. The closed-loop NCS with a state-feedback controller is transformed into a Markov system, which is convenient for controller synthesis. Two types of state-feedback control laws are taken into account. Sufficient conditions for the existence of controllers guaranteeing stochastic stability with an H∞ disturbance attenuation level are derived through a Lyapunov function dependent on the upper bound of the number of consecutive packet dropouts. A numerical example is finally provided to show the effectiveness of the proposed method.

  • Stochastic Analysis of a Standby System With Waiting Repair Strategy

    Publication Year: 2013 , Page(s): 698 - 707
    Cited by:  Papers (4)

    This paper investigates the reliability of a standby system incorporating waiting time to repair. The considered system consists of two units, namely, the main unit and the standby unit. Whenever the main unit fails, the whole load is transferred to the standby unit instantaneously by a switching-over device. The failed main unit may have to wait for repair owing to unavailability of the repair facility. When both the main and standby units fail, the system goes into the complete failure mode. The system may also fail due to an incorrect start, which can occur with an untrained and inexperienced operator. The repair of the main and standby units follows a general distribution, whereas repair due to human error is modeled with the help of the Gumbel-Hougaard family copula. The system is analyzed by the supplementary variable technique and Laplace transformation. Various reliability measures, such as availability, mean time to failure, and the profit function, have been evaluated for the considered system. A numerical example illustrating the utility of the model is also presented.

  • On Event Detection and Localization in Acyclic Flow Networks

    Publication Year: 2013 , Page(s): 708 - 723
    Cited by:  Papers (3)

    Acyclic flow networks, present in many infrastructures of national importance (e.g., oil, gas, and water distribution systems), have been attracting immense research interest. Existing solutions for detecting and locating attacks against these infrastructures have proven costly and imprecise, particularly when dealing with large-scale distribution systems. In this article, to the best of our knowledge for the first time, we investigate how mobile sensor networks can be used for optimal event detection and localization in acyclic flow networks. We propose the idea of using sensors that move along the edges of the network and detect events (i.e., attacks). To localize the events, sensors detect proximity to beacons, which are devices with known placement in the network. We formulate the problem of minimizing the cost of the monitoring infrastructure (i.e., minimizing the number of sensors and beacons deployed) in a predetermined zone of interest, while ensuring a degree of coverage by sensors and a required accuracy in locating events using beacons. We propose algorithms for solving this problem and demonstrate their effectiveness with results obtained from a realistic flow network simulator.

  • Soundness for Resource-Constrained Workflow Nets Is Decidable

    Publication Year: 2013 , Page(s): 724 - 729
    Save to Project | Request Permissions | Quick Abstract | PDF (233 KB) | HTML

    We investigate the verification of the soundness property for workflow nets (WF-nets) extended with resources. We consider the most general notion of soundness, which requires that, for some finite initial number of resource items per resource type, the WF-net always retains the possibility to terminate, regardless of the number of instances; moreover, adding resources to a sound net does not invalidate this property. We prove that this problem is decidable by reducing it to a home-space problem, and we show how soundness can be decided using the procedure for deciding a home-space property. View full abstract»
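    The "possibility to terminate from every reachable marking" requirement can be made concrete for small, bounded nets by brute-force state search. This sketch checks only that option-to-complete part of soundness, ignores resources, and may not terminate on unbounded nets; the paper's decidability result rests on home spaces, not on enumeration of this kind.

```python
from collections import deque

def sound(transitions, initial, final):
    """Brute-force check: from every reachable marking, the final
    marking must be reachable.  Markings are dicts place -> token
    count; transitions are (preset, postset) pairs of such dicts.
    Toy check for small bounded nets only."""
    def enabled(m):
        for pre, post in transitions:
            if all(m.get(p, 0) >= n for p, n in pre.items()):
                yield pre, post

    def fire(m, pre, post):
        m2 = dict(m)
        for p, n in pre.items():
            m2[p] -= n
        for p, n in post.items():
            m2[p] = m2.get(p, 0) + n
        return {p: n for p, n in m2.items() if n}

    def reachable(start):
        seen = {frozenset(start.items())}
        queue, found = deque([start]), [start]
        while queue:
            m = queue.popleft()
            for pre, post in enabled(m):
                m2 = fire(m, pre, post)
                key = frozenset(m2.items())
                if key not in seen:
                    seen.add(key)
                    queue.append(m2)
                    found.append(m2)
        return found

    goal = frozenset(final.items())
    return all(any(frozenset(m2.items()) == goal for m2 in reachable(m))
               for m in reachable(initial))
```

    A two-transition net routing a token from an input place through an intermediate place to an output place passes the check; dropping the second transition makes the final marking unreachable and the check fails.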

  • Incremental Online Learning of Robot Behaviors From Selected Multiple Kinesthetic Teaching Trials

    Publication Year: 2013 , Page(s): 730 - 740
    Cited by:  Papers (1)
    Save to Project | Request Permissions | Quick Abstract | PDF (1324 KB) | HTML

    This paper presents a new approach to the incremental online learning of behaviors by a robot from multiple kinesthetic teaching trials. The approach enables a robot to refine and reproduce a specific behavior every time a new teaching trial is provided and to decide autonomously whether to accept or reject each trial. The robot neglects bad teaching trials and learns a behavior based on adequate teaching trials. The framework of this approach consists of the projection of motion data to a latent space and the description of motion data in a Gaussian mixture model (GMM). To realize the incremental online learning, the latent space and the GMM are refined incrementally after each proper teaching trial. The trial data are discarded after being used. The number of Gaussian components in the GMM is not fixed initially but is selected autonomously by the robot over the trials. These properties make the proposed method well suited to practical human-robot interaction. Experiments with a humanoid robot show the feasibility of the approach. We demonstrate that, through our learning algorithm, the robot can incrementally refine and reproduce learned behaviors that accurately represent the essential characteristics of the teaching trials, and that it can reject erroneous teaching trials to improve learning performance. View full abstract»
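    The "discard the trial data after use" idea rests on keeping only sufficient statistics. The sketch below shows a single Gaussian component updated Welford-style per sample; it is a minimal illustration of that principle, whereas the paper incrementally refines a full GMM in a learned latent space.

```python
import numpy as np

class IncrementalGaussian:
    """Running sufficient statistics (count, mean, scatter matrix) for
    one Gaussian component, updated per sample so the raw trial data
    can be discarded after use.  Single-component sketch only."""
    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.scatter = np.zeros((dim, dim))

    def update(self, x):
        x = np.asarray(x, dtype=float)
        self.n += 1
        delta = x - self.mean          # deviation from the old mean
        self.mean += delta / self.n    # incremental mean update
        self.scatter += np.outer(delta, x - self.mean)

    @property
    def cov(self):
        # Unbiased sample covariance from the accumulated scatter.
        return self.scatter / max(self.n - 1, 1)
```

    After streaming the four corners of a unit-scaled square through `update`, the component's mean sits at the center and the covariance is diagonal, matching the batch estimate computed from all samples at once.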

  • IEEE Systems, Man, and Cybernetics Society Information

    Publication Year: 2013 , Page(s): C3
    Save to Project | Request Permissions | PDF (98 KB)
    Freely Available from IEEE
  • IEEE Transactions on Human-Machine Systems information for authors

    Publication Year: 2013 , Page(s): C4
    Save to Project | Request Permissions | PDF (108 KB)
    Freely Available from IEEE

Aims & Scope

The scope of the IEEE Transactions on Systems, Man, and Cybernetics: Systems includes the fields of systems engineering.

 

Full Aims & Scope

Meet Our Editors

Editor-in-Chief
C. L. Philip Chen
The University of Macau