
IEEE Transactions on Cybernetics

Early Access Articles

Early Access articles are new content made available in advance of the final electronic or print versions and result from IEEE's Preprint or Rapid Post processes. Preprint articles are peer-reviewed but not fully edited. Rapid Post articles are peer-reviewed and edited but not paginated. Both these types of Early Access articles are fully citable from the moment they appear in IEEE Xplore.


Displaying Results 1 - 25 of 213
  • Adaptive Replacement Strategies for MOEA/D

    Publication Year: 2015, Page(s): 1

    Multiobjective evolutionary algorithms based on decomposition (MOEA/D) decompose a multiobjective optimization problem into a set of simple optimization subproblems and solve them in a collaborative manner. A replacement scheme, which assigns a new solution to a subproblem, plays a key role in balancing diversity and convergence in MOEA/D. This paper proposes a global replacement scheme which assigns a new solution to its most suitable subproblems. We demonstrate that the replacement neighborhood size is critical for population diversity and convergence, and develop an approach for adjusting this size dynamically. A steady-state algorithm and a generational one with this approach have been designed and experimentally studied. The experimental results on a number of test problems have shown that the proposed algorithms have some advantages.

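    The global replacement idea above can be sketched as follows (a minimal illustration under standard MOEA/D assumptions, not the authors' exact procedure): the new solution is first assigned to the subproblem whose Tchebycheff aggregation value it minimizes, and replacement is then confined to the T_r subproblems closest to that one, where T_r is the replacement neighborhood size the paper adjusts dynamically. The weight vectors W, ideal point z_star, and sorted neighbor lists are assumed to come from a usual MOEA/D setup.

        import numpy as np

        def tchebycheff(f, w, z_star):
            # Tchebycheff aggregation of objective vector f for weight vector w.
            return np.max(w * np.abs(f - z_star))

        def global_replacement(pop_F, new_f, W, z_star, neighbors, T_r):
            # pop_F: (N, m) objective values, one current solution per subproblem.
            # new_f: objective vector of the newly generated solution.
            # neighbors: (N, N) subproblem indices sorted by weight-vector distance.
            agg = np.array([tchebycheff(new_f, W[j], z_star) for j in range(len(W))])
            k = int(np.argmin(agg))                      # most suitable subproblem
            replaced = []
            for j in neighbors[k][:T_r]:                 # replace only near subproblem k
                if agg[j] < tchebycheff(pop_F[j], W[j], z_star):
                    pop_F[j] = new_f                     # a full MOEA/D also replaces x_j
                    replaced.append(int(j))
            return replaced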
  • Dynamical Behaviors of Multiple Equilibria in Competitive Neural Networks With Discontinuous Nonmonotonic Piecewise Linear Activation Functions

    Publication Year: 2015, Page(s): 1

    This paper addresses the problem of coexistence and dynamical behaviors of multiple equilibria for competitive neural networks. First, a general class of discontinuous nonmonotonic piecewise linear activation functions is introduced for competitive neural networks. Then, based on the fixed-point theorem and the theory of strictly diagonally dominant matrices, it is shown that under some conditions, such n-neuron competitive neural networks can have 5ⁿ equilibria, among which 3ⁿ equilibria are locally stable and the others are unstable. More importantly, it is revealed that the neural networks with the discontinuous activation functions introduced in this paper can have both more total equilibria and more locally stable equilibria than the ones with other activation functions, such as the continuous Mexican-hat-type activation function and the discontinuous two-level activation function. Furthermore, the 3ⁿ locally stable equilibria given in this paper are located not only in saturated regions, but also in unsaturated regions, which is different from the existing results on multistability of neural networks with multiple-level activation functions. A simulation example is provided to illustrate and validate the theoretical findings.

  • Learning Trajectories for Robot Programing by Demonstration Using a Coordinated Mixture of Factor Analyzers

    Publication Year: 2015, Page(s): 1

    This paper presents an approach for learning robust models of humanoid robot trajectories from demonstration. In this formulation, a model of the joint-space trajectory is represented as a sequence of motion primitives, where a nonlinear dynamical system is learned by constructing a hidden Markov model (HMM) predicting the probability of residing in each motion primitive. With a coordinated mixture of factor analyzers as the emission probability density of the HMM, we are able to synthesize motion from a dynamic system acting along a manifold shared by both demonstrator and robot. This provides significant advantages in model complexity for kinematically redundant robots and can reduce the number of corresponding observations required for further learning. A stability analysis shows that the system is robust to deviations from the expected trajectory as well as to transitional motion between manifolds. This approach is demonstrated experimentally by recording human motion with inertial sensors, learning a motion primitive model and a correspondence map between the human and the robot, and synthesizing motion from the manifold to control a 19-degree-of-freedom humanoid robot.

  • Finite Time Control Design for Bilateral Teleoperation System With Position Synchronization Error Constrained

    Publication Year: 2015, Page(s): 1

    Due to the cognitive limitations of the human operator and the lack of complete information about the remote environment, the performance of teleoperation systems cannot be guaranteed in most cases. However, some practical tasks conducted by teleoperation systems require high performance; for example, telesurgery needs high-speed and high-precision control to guarantee the patient's health. To obtain satisfactory performance, error constrained control is employed by applying the barrier Lyapunov function (BLF). With constrained synchronization errors, high convergence speed, small overshoot, and an arbitrarily predefined small residual synchronization error can be achieved simultaneously. Nevertheless, as with many classical control schemes, only asymptotic/exponential convergence (i.e., the synchronization errors converge to zero as time goes to infinity) can be achieved with error constrained control. Finite-time convergence is clearly more desirable. To obtain finite-time synchronization performance, this paper develops a terminal sliding mode (TSM)-based finite-time control method for a teleoperation system with constrained position error. First, a new nonsingular fast terminal sliding mode (NFTSM) surface with new transformed synchronization errors is proposed. Second, an adaptive neural network system is applied to deal with the system uncertainties and external disturbances. Third, the BLF is applied to prove stability and the nonviolation of the synchronization error constraints. Finally, comparisons are conducted in simulation, and experimental results are also presented to show the effectiveness of the proposed method.

  • Optimized Assistive Human–Robot Interaction Using Reinforcement Learning

    Publication Year: 2015, Page(s): 1

    An intelligent human–robot interaction (HRI) system with adjustable robot behavior is presented. The proposed HRI system assists the human operator to perform a given task with minimum workload demands and optimizes the overall human–robot system performance. Motivated by human factor studies, the presented control structure consists of two control loops. First, a robot-specific neuro-adaptive controller is designed in the inner loop to make the unknown nonlinear robot behave like a prescribed robot impedance model as perceived by a human operator. In contrast to existing neural network and adaptive impedance-based control methods, no information about the task performance or the prescribed robot impedance model parameters is required in the inner loop. Then, a task-specific outer-loop controller is designed to find the optimal parameters of the prescribed robot impedance model to adjust the robot's dynamics to the operator's skills and minimize the tracking error. The outer loop includes the human operator, the robot, and the task performance details. The problem of finding the optimal parameters of the prescribed robot impedance model is transformed into a linear quadratic regulator (LQR) problem which minimizes the human effort and optimizes the closed-loop behavior of the HRI system for a given task. To obviate the requirement of knowledge of the human model, integral reinforcement learning is used to solve the given LQR problem. Simulation results on an x-y table and a robot arm, and experimental implementation results on a PR2 robot, confirm the suitability of the proposed method.

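    For context on the outer loop described above: tuning the prescribed impedance parameters is cast as an LQR problem, which integral reinforcement learning solves without knowing the combined human-robot dynamics. The hedged sketch below shows only the model-based baseline that such a learner approximates, i.e., solving the continuous algebraic Riccati equation for a hypothetical linear model (A, B) with placeholder weights Q and R; it is not the paper's model-free procedure.

        import numpy as np
        from scipy.linalg import solve_continuous_are

        # Hypothetical 2-state error/impedance model (placeholder values).
        A = np.array([[0.0, 1.0],
                      [-2.0, -0.5]])
        B = np.array([[0.0],
                      [1.0]])
        Q = np.diag([10.0, 1.0])   # weight on tracking error and its rate
        R = np.array([[0.1]])      # weight on (human/robot) effort

        P = solve_continuous_are(A, B, Q, R)   # model-based LQR solution
        K = np.linalg.solve(R, B.T @ P)        # optimal state-feedback gain, u = -K x
        print("LQR gain:", K)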
  • Control Synthesis of Discrete-Time T-S Fuzzy Systems via a Multi-Instant Homogenous Polynomial Approach

    Publication Year: 2015, Page(s): 1

    This paper deals with the problem of control synthesis of discrete-time Takagi–Sugeno fuzzy systems by employing a novel multi-instant homogenous polynomial approach. A new multi-instant fuzzy control scheme and a new class of fuzzy Lyapunov functions, which are homogenous polynomially parameter-dependent on both the current-time normalized fuzzy weighting functions and the past-time normalized fuzzy weighting functions, are proposed for relaxed control synthesis. Then, relaxed stabilization conditions are derived with less conservatism than existing ones. Furthermore, the relaxation quality of the obtained stabilization conditions is further ameliorated by developing an efficient slack variable approach, which presents a multipolynomial dependence on the normalized fuzzy weighting functions at the current and past instants of time. Two simulation examples are given to demonstrate the effectiveness and benefits of the results developed in this paper.

  • Multiobjective Optimization of Linear Cooperative Spectrum Sensing: Pareto Solutions and Refinement

    Publication Year: 2015, Page(s): 1

    In linear cooperative spectrum sensing, the weights of secondary users and the detection threshold should be optimally chosen to minimize the missed detection probability and to maximize the secondary network throughput. Since these two objectives are not completely compatible, we study this problem from the viewpoint of multiple-objective optimization. We aim to obtain a set of evenly distributed Pareto solutions. To this end, we introduce the normal constraint (NC) method to transform the problem into a set of single-objective optimization (SOO) problems. Each SOO problem usually results in a Pareto solution. However, NC does not provide any solution method for these SOO problems, nor any indication of the optimal number of Pareto solutions. Furthermore, NC has no preference over the Pareto solutions, while a designer may be interested in only some of them. In this paper, we employ a stochastic global optimization algorithm to solve the SOO problems, and then propose a simple method to determine the optimal number of Pareto solutions under a computational complexity constraint. In addition, we extend NC to refine the Pareto solutions and select the ones of interest. Finally, we verify the effectiveness and efficiency of the proposed methods through computer simulations.

  • Multiobjective Vehicle Routing Problems With Simultaneous Delivery and Pickup and Time Windows: Formulation, Instances, and Algorithms

    Publication Year: 2015, Page(s): 1

    This paper investigates a practical variant of the vehicle routing problem (VRP), called the VRP with simultaneous delivery and pickup and time windows (VRPSDPTW), in the logistics industry. VRPSDPTW is an important logistics problem in closed-loop supply chain network optimization, and it exhibits multiobjective properties in real-world applications. In this paper, a general multiobjective VRPSDPTW (MO-VRPSDPTW) with five objectives is first defined, and a set of MO-VRPSDPTW instances based on real-world data is then introduced. These instances capture the multiobjective nature of the problem more realistically and represent more challenging MO-VRPSDPTW cases. Finally, two algorithms, multiobjective local search (MOLS) and a multiobjective memetic algorithm (MOMA), are designed, implemented, and compared for solving MO-VRPSDPTW. The simulation results on the proposed real-world instances and on traditional instances show that MOLS outperforms MOMA in most instances; however, the superiority of MOLS over MOMA is less pronounced on the real-world instances than on the traditional ones.

    Open Access
  • Multivariate Discretization Based on Evolutionary Cut Points Selection for Classification

    Publication Year: 2015, Page(s): 1

    Discretization is one of the most relevant techniques for data preprocessing. Its main goal is to transform numerical attributes into discrete ones to help experts understand the data more easily, and it also makes it possible to use learning algorithms that require discrete data as input, such as Bayesian or rule learning. We focus our attention on handling multivariate classification problems, where high interactions among multiple attributes exist. In this paper, we propose the use of evolutionary algorithms to select a subset of cut points that defines the best possible discretization scheme of a data set using a wrapper fitness function. We also incorporate a reduction mechanism to successfully manage the multivariate approach on large data sets. Our method has been compared with the best state-of-the-art discretizers on 45 real data sets. The experiments show that the proposed algorithm outperforms the other methods, producing competitive discretization schemes in terms of accuracy for the C4.5, Naive Bayes, PART, and PrUning and BuiLding Integrated in Classification classifiers, while obtaining far simpler solutions.

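    A wrapper-style sketch of the core idea (a deliberately simplified GA, not the authors' algorithm and without their reduction mechanism): a binary chromosome switches candidate cut points on or off, and fitness is the cross-validated accuracy of a classifier trained on the resulting discretized data. Candidate-cut generation, the classifier, and all parameters below are illustrative assumptions.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)

        def candidate_cuts(X, n_per_attr=10):
            # Candidate cut points per attribute taken from quantiles (illustrative).
            qs = np.linspace(0.05, 0.95, n_per_attr)
            return [np.unique(np.quantile(X[:, j], qs)) for j in range(X.shape[1])]

        def discretize(X, cuts, mask):
            # Keep only the cut points switched on by the binary chromosome `mask`.
            start, cols = 0, []
            for j, c in enumerate(cuts):
                sel = c[mask[start:start + len(c)]]
                cols.append(np.digitize(X[:, j], sel) if sel.size else np.zeros(len(X), int))
                start += len(c)
            return np.column_stack(cols)

        def fitness(mask, X, y, cuts):
            # Wrapper fitness: cross-validated accuracy on the discretized data.
            Xd = discretize(X, cuts, mask)
            return cross_val_score(DecisionTreeClassifier(max_depth=5), Xd, y, cv=3).mean()

        def evolve(X, y, generations=30, pop_size=20):
            cuts = candidate_cuts(X)
            n = sum(len(c) for c in cuts)
            pop = rng.random((pop_size, n)) < 0.5
            for _ in range(generations):
                fit = np.array([fitness(ind, X, y, cuts) for ind in pop])
                parents = pop[np.argsort(fit)[-(pop_size // 2):]]         # truncation selection
                children = parents ^ (rng.random(parents.shape) < 0.05)   # bit-flip mutation
                pop = np.vstack([parents, children])
            fit = np.array([fitness(ind, X, y, cuts) for ind in pop])
            return pop[np.argmax(fit)], fit.max()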
  • Large-Scale Aerial Image Categorization Using a Multitask Topological Codebook

    Publication Year: 2015, Page(s): 1

    Categorizing the millions of aerial images on Google Maps quickly and accurately is a useful capability in pattern recognition. Existing methods cannot handle this task successfully for two reasons: 1) the aerial images' topologies are the key feature for distinguishing their categories, but they cannot be effectively encoded by a conventional visual codebook, and 2) it is challenging to build a real-time image categorization system, as some geo-aware apps update over 20 aerial images per second. To solve these problems, we propose an efficient aerial image categorization algorithm. It focuses on learning a discriminative topological codebook of aerial images under a multitask learning framework. The pipeline can be summarized as follows. We first construct a region adjacency graph (RAG) that describes the topology of each aerial image; aerial image categorization can then be formulated as RAG-to-RAG matching. According to graph theory, RAG-to-RAG matching is conducted by enumeratively comparing all their respective graphlets (i.e., small subgraphs). To alleviate the high time consumption, we propose to learn a codebook containing topologies jointly discriminative to multiple categories. The learned topological codebook guides the extraction of the discriminative graphlets. Finally, these graphlets are integrated into an AdaBoost model for predicting aerial image categories. Experimental results show that our approach is competitive with several existing recognition models. Furthermore, over 24 aerial images are processed per second, demonstrating that the approach is ready for real-world applications.

  • Consensus Control With Failure: Wait or Abandon?

    Publication Year: 2015, Page(s): 1

    This paper introduces and solves a decision-making problem in the context of consensus control with failure. We study an optimal consensus control problem in which n autonomous agents try to arrive at a target at the same time. One of the agents suddenly fails, and the remaining n - 1 agents can either wait for or abandon the failed agent. If they wait, they must slow down and delay the consensus time. If they abandon the failed agent, they can reach consensus earlier at the cost of losing one agent at consensus; this cost is modeled as an added delay to the consensus time. The decision problem is to decide whether to wait or abandon and, if abandoning, when. To solve this problem, we derive analytical expressions and establish structural properties for the target distance functions. We use numerical and simulation examples to demonstrate the applications of the derived formulas and results.

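    A toy numerical illustration of the wait-or-abandon trade-off (purely hypothetical numbers; the paper derives analytical target-distance functions rather than this simple comparison): waiting forces every agent to the failed agent's degraded pace, while abandoning lets the remaining agents proceed at nominal speed but charges the loss of an agent as an added delay.

        def consensus_time_wait(distances, v_degraded):
            # All n agents slow down so the failed agent can keep up.
            return max(d / v_degraded for d in distances)

        def consensus_time_abandon(distances, v_nominal, failed_idx, penalty):
            # The remaining n-1 agents proceed at nominal speed; losing one agent
            # is charged as an added delay to the consensus time.
            rest = [d for i, d in enumerate(distances) if i != failed_idx]
            return max(d / v_nominal for d in rest) + penalty

        d = [100.0, 80.0, 120.0, 90.0]        # hypothetical distances to the target
        t_wait = consensus_time_wait(d, v_degraded=1.2)
        t_abandon = consensus_time_abandon(d, v_nominal=2.0, failed_idx=2, penalty=15.0)
        print("abandon" if t_abandon < t_wait else "wait", t_wait, t_abandon)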
  • An Improved Multiobjective Optimization Evolutionary Algorithm Based on Decomposition for Complex Pareto Fronts

    Publication Year: 2015, Page(s): 1

    The multiobjective evolutionary algorithm based on decomposition (MOEA/D) has been shown to be very efficient in solving multiobjective optimization problems (MOPs). In practice, the Pareto-optimal front (POF) of many MOPs has complex characteristics: for example, it may have a long tail, a sharp peak, or disconnected regions, which significantly degrade the performance of MOEA/D. This paper proposes an improved MOEA/D for handling such complex problems. In the proposed algorithm, a two-phase strategy (TP) divides the whole optimization procedure into two phases. Based on the crowdedness of the solutions found in the first phase, the algorithm decides whether or not to dedicate computational resources to unsolved subproblems in the second phase. In addition, a new niche scheme is introduced into the improved MOEA/D to guide the selection of mating parents and avoid producing duplicate solutions, which is very helpful for maintaining population diversity when the POF of the MOP being optimized is discontinuous. The performance of the proposed algorithm is investigated on existing benchmark MOPs and newly designed MOPs with complex POF shapes in comparison with several MOEA/D variants and other approaches. The experimental results show that the proposed algorithm produces promising performance on these complex problems.

    Open Access
  • A Level Set Approach to Image Segmentation With Intensity Inhomogeneity

    Publication Year: 2015, Page(s): 1

    It is often difficult to accurately segment images with intensity inhomogeneity, because most representative algorithms are region-based and depend on intensity homogeneity of the object of interest. In this paper, we present a novel level set method for image segmentation in the presence of intensity inhomogeneity. The inhomogeneous objects are modeled as Gaussian distributions of different means and variances, and a sliding window is used to map the original image into another domain, where the intensity distribution of each object is still Gaussian but better separated. The means of the Gaussian distributions in the transformed domain can be adaptively estimated by multiplying a bias field with the original signal within the window. A maximum likelihood energy functional is then defined on the whole image region, which combines the bias field, the level set function, and the piecewise constant function approximating the true image signal. The proposed level set method can be directly applied to simultaneous segmentation and bias correction for 3T and 7T magnetic resonance images. Extensive evaluations on synthetic and real images demonstrate the superiority of the proposed method over other representative algorithms.

  • Ensemble and Arithmetic Recombination-Based Speciation Differential Evolution for Multimodal Optimization

    Publication Year: 2015, Page(s): 1

    Multimodal optimization problems consist of multiple equal or comparable spatially distributed solutions. Niching and clustering differential evolution (DE) techniques have been demonstrated to be highly effective for solving such problems. The key challenge in the speciation niching technique is to balance local solution exploitation against global exploration. Our proposal enhances exploration by applying arithmetic recombination with speciation and improves exploitation of individual peaks by applying neighborhood mutation with ensemble strategies. Our novel algorithm, called ensemble and arithmetic recombination-based speciation DE, is shown to either outperform or perform comparably to state-of-the-art algorithms on 29 common multimodal benchmark problems. Comparable performance is observed only on problems that are solved perfectly by the algorithms in the literature.

  • Self-Adaptive Differential Evolution Algorithm With Zoning Evolution of Control Parameters and Adaptive Mutation Strategies

    Publication Year: 2015, Page(s): 1
    Multimedia

    The performance of the differential evolution (DE) algorithm is significantly affected by the choice of mutation strategies and control parameters. Maintaining the search capability of various control parameter combinations throughout the entire evolution process is also a key issue. A self-adaptive DE algorithm with zoning evolution of control parameters and adaptive mutation strategies is proposed in this paper. In the proposed algorithm, the mutation strategies are automatically adjusted with population evolution, and the control parameters evolve in their own zoning to self-adapt and discover near-optimal values autonomously. The proposed algorithm is compared with five state-of-the-art DE algorithm variants on a set of benchmark test functions. Furthermore, seven nonparametric statistical tests are implemented to analyze the experimental results. The results indicate that the overall performance of the proposed algorithm is better than those of the five existing improved algorithms.

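    As a point of reference for the self-adaptation idea, the sketch below follows the well-known jDE-style scheme, in which each individual carries its own F and CR that are occasionally resampled and survive only when they produce a better trial vector; this is a generic illustration, not the zoning evolution or adaptive strategy selection proposed in the paper.

        import numpy as np

        def self_adaptive_de(obj, bounds, pop_size=30, gens=200, seed=0):
            rng = np.random.default_rng(seed)
            dim = len(bounds)
            lo, hi = np.array(bounds, dtype=float).T
            pop = rng.uniform(lo, hi, (pop_size, dim))
            fit = np.array([obj(x) for x in pop])
            F = np.full(pop_size, 0.5)
            CR = np.full(pop_size, 0.9)
            for _ in range(gens):
                for i in range(pop_size):
                    # jDE-style self-adaptation: resample F/CR with probability 0.1
                    Fi = rng.uniform(0.1, 1.0) if rng.random() < 0.1 else F[i]
                    CRi = rng.random() if rng.random() < 0.1 else CR[i]
                    a, b, c = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
                    mutant = np.clip(pop[a] + Fi * (pop[b] - pop[c]), lo, hi)   # DE/rand/1
                    cross = rng.random(dim) < CRi
                    cross[rng.integers(dim)] = True                             # binomial crossover
                    trial = np.where(cross, mutant, pop[i])
                    f_trial = obj(trial)
                    if f_trial <= fit[i]:          # successful parameters are inherited
                        pop[i], fit[i], F[i], CR[i] = trial, f_trial, Fi, CRi
            best = int(np.argmin(fit))
            return pop[best], fit[best]

        # Example: minimize the 5-D sphere function.
        x_best, f_best = self_adaptive_de(lambda x: float(np.sum(x * x)), [(-5, 5)] * 5)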
  • A Local Structural Descriptor for Image Matching via Normalized Graph Laplacian Embedding

    Publication Year: 2015, Page(s): 1

    This paper investigates graph spectral approaches to the problem of point pattern matching. Specifically, we concentrate on the issue of how to effectively use graph spectral properties to characterize point patterns in the presence of positional jitter and outliers. A novel local spectral descriptor is proposed to represent the attribute domain of feature points. For a point in a given point-set, weighted graphs are constructed over its neighboring points and their normalized Laplacian matrices are then computed. Using the known spectral radius of the normalized Laplacian matrix, the distribution of the eigenvalues of these normalized Laplacian matrices is summarized as a histogram to form a descriptor. The proposed spectral descriptor is finally combined with the approximate distance order for recovering correspondences between point-sets. Extensive experiments demonstrate the effectiveness of the proposed approach and its superiority to existing methods.

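    A minimal sketch of the descriptor as described above: for each feature point, build a weighted graph over its nearest neighbors, compute the normalized graph Laplacian, and histogram its eigenvalues over [0, 2], the known spectral-radius bound that fixes the bin range. The Gaussian edge weights, neighborhood size k, and bin count below are assumptions, not the paper's exact settings.

        import numpy as np

        def local_spectral_descriptor(points, idx, k=8, bins=10, sigma=1.0):
            # Histogram of normalized-Laplacian eigenvalues for the neighborhood of points[idx].
            p = points[idx]
            d = np.linalg.norm(points - p, axis=1)
            nbrs = points[np.argsort(d)[1:k + 1]]          # k nearest neighbors, excluding the point
            diff2 = np.sum((nbrs[:, None, :] - nbrs[None, :, :]) ** 2, axis=-1)
            W = np.exp(-diff2 / (2.0 * sigma ** 2))        # Gaussian-weighted adjacency
            np.fill_diagonal(W, 0.0)
            deg = W.sum(axis=1)
            Dm = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
            L = np.eye(k) - Dm @ W @ Dm                    # normalized Laplacian, eigenvalues in [0, 2]
            eig = np.linalg.eigvalsh(L)
            hist, _ = np.histogram(eig, bins=bins, range=(0.0, 2.0))
            return hist / hist.sum()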
  • The Analysis of Image Contrast: From Quality Assessment to Automatic Enhancement

    Publication Year: 2015, Page(s): 1

    Proper contrast change can improve the perceptual quality of most images, but it has largely been overlooked in image quality assessment (IQA) research. To fill this void, we first report a new large dedicated contrast-changed image database (CCID2014), which includes 655 images and associated subjective ratings recorded from 22 inexperienced observers. We then present a novel reduced-reference image quality metric for contrast change (RIQMC) using phase congruency and statistical information of the image histogram. Validation of the proposed model is conducted on the contrast-related CCID2014, TID2008, categorical image quality (CSIQ), and TID2013 databases, and the results justify the superiority and efficiency of RIQMC over a majority of classical and state-of-the-art IQA methods. Furthermore, we combine the aforesaid subjective and objective assessments to derive the RIQMC-based optimal histogram mapping (ROHIM) for automatic contrast enhancement, which is shown to outperform recently developed enhancement technologies.

  • A Multiobjective Genetic Programming-Based Ensemble for Simultaneous Feature Selection and Classification

    Publication Year: 2015, Page(s): 1
    Multimedia

    We present an integrated algorithm for simultaneous feature selection (FS) and design of diverse classifiers using a steady-state multiobjective genetic programming (GP) approach, which minimizes three objectives: 1) false positives (FPs); 2) false negatives (FNs); and 3) the number of leaf nodes in the tree. Our method divides a c-class problem into c binary classification problems and evolves c sets of genetic programs to create c ensembles. During the mutation operation, our method exploits the fitness as well as the unfitness of features, which change dynamically with generations, with a view to using a set of highly relevant features with low redundancy. The classifiers of the i-th class determine the net belongingness of an unknown data point to the i-th class using a weighted voting scheme, which makes use of the FP and FN mistakes made on the training data. We test our method on eight microarray and 11 text data sets with a diverse number of classes (from 2 to 44), a large number of features (from 2000 to 49,151), and a high feature-to-sample ratio (from 1.03 to 273.1). We compare our method with a bi-objective GP scheme that does not use any FS or rule size reduction strategy, which demonstrates the effectiveness of the proposed FS and rule size reduction schemes. Furthermore, we compare our method with four classification methods in conjunction with six feature selection algorithms and the full feature set. Our scheme performs best for 380 out of 474 combinations of data sets, algorithms, and FS methods.

  • Fuzzy Adaptive Quantized Control for a Class of Stochastic Nonlinear Uncertain Systems

    Publication Year: 2015, Page(s): 1

    In this paper, a fuzzy adaptive approach for stochastic strict-feedback nonlinear systems with quantized input signals is developed. In contrast to existing research on the quantized input problem, which focuses on quantized stabilization, this paper considers the quantized tracking problem, which recovers stabilization as a special case. In addition, uncertain nonlinearity and unknown stochastic disturbances are simultaneously considered in the quantized feedback control system. By putting forward a new nonlinear decomposition of the quantized input, the relationship between the control signal and the quantized signal is established; as a result, the major technical difficulty arising from the piecewise quantized input is overcome. Based on the universal approximation capability of fuzzy logic systems, a novel fuzzy adaptive tracking controller is constructed via the backstepping technique. The proposed controller guarantees that the tracking error converges to a neighborhood of the origin in the sense of probability and that all signals in the closed-loop system remain bounded in probability. Finally, an example illustrates the effectiveness of the proposed control approach.

  • On Integral Invariants for Effective 3-D Motion Trajectory Matching and Recognition

    Publication Year: 2015, Page(s): 1

    Motion trajectories tracked from the motions of humans, robots, and moving objects can provide an important clue for motion analysis, classification, and recognition. This paper defines new integral invariants for 3-D motion trajectories. Based on two typical kernel functions, we design two integral invariants, the distance and area integral invariants. The area integral invariants are estimated based on the blurred segments of the noisy discrete curve to avoid the computation of high-order derivatives. Such integral invariants for a motion trajectory enjoy some desirable properties, such as computational locality, uniqueness of representation, and noise insensitivity. Moreover, our formulation allows the analysis of motion trajectories at a range of scales by varying the scale of the kernel function. The features of motion trajectories can thus be perceived at multiscale levels in a coarse-to-fine manner. Finally, we define a distance function to measure trajectory similarity and find similar trajectories. Through the experiments, we examine the robustness and effectiveness of the proposed integral invariants and find that they capture the motion cues in trajectory matching and sign recognition satisfactorily.

  • Graph Embedded Extreme Learning Machine

    Publication Year: 2015, Page(s): 1

    In this paper, we propose a novel extension of the extreme learning machine (ELM) algorithm for single-hidden-layer feedforward neural network training that is able to incorporate subspace learning (SL) criteria in the optimization process followed for the calculation of the network's output weights. The proposed graph embedded ELM (GEELM) algorithm is able to naturally exploit both intrinsic and penalty SL criteria that have been (or will be) designed under the graph embedding framework. In addition, we extend the proposed GEELM algorithm so that it can exploit SL criteria in arbitrary (even infinite) dimensional ELM spaces. We evaluate the proposed approach on eight standard classification problems and nine publicly available datasets designed for three problems related to human behavior analysis, i.e., the recognition of human faces, facial expressions, and activities. Experimental results demonstrate the effectiveness of the proposed approach, since it outperforms other ELM-based classification schemes in all cases.

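    The sketch below shows one standard way a graph (subspace-learning) criterion can enter the ELM output-weight solution: a random hidden layer followed by a ridge-regularized least-squares fit with an added Laplacian penalty. It is a generic graph-regularized ELM under assumed notation, not necessarily the exact GEELM formulation evaluated in the paper.

        import numpy as np

        def graph_regularized_elm(X, T, L, n_hidden=200, c=1.0, lam=0.1, seed=0):
            # X: (n, d) inputs; T: (n, k) one-hot targets; L: (n, n) graph Laplacian
            # encoding the SL criterion. Returns input weights, biases, output weights.
            rng = np.random.default_rng(seed)
            Win = rng.standard_normal((X.shape[1], n_hidden))
            b = rng.standard_normal(n_hidden)
            H = 1.0 / (1.0 + np.exp(-(X @ Win + b)))       # random hidden-layer features
            # beta = (H'H + lam * H'LH + I/c)^(-1) H'T
            A = H.T @ H + lam * (H.T @ L @ H) + np.eye(n_hidden) / c
            beta = np.linalg.solve(A, H.T @ T)
            return Win, b, beta

        def elm_predict(X, Win, b, beta):
            H = 1.0 / (1.0 + np.exp(-(X @ Win + b)))
            return np.argmax(H @ beta, axis=1)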
  • Semi-Supervised Text Classification With Universum Learning

    Publication Year: 2015, Page(s): 1

    Universum, a collection of nonexamples that do not belong to any class of interest, has become a new research topic in machine learning. This paper devises a semi-supervised learning with Universum algorithm based on the boosting technique, and focuses on situations where only a few labeled examples are available. We also show that the training error of AdaBoost with Universum is bounded by the product of the normalization factors, and that the training error drops exponentially fast when each weak classifier is slightly better than random guessing. Finally, the experiments use four data sets with several combinations. Experimental results indicate that the proposed algorithm can benefit from Universum examples and outperform several alternative methods, particularly when insufficient labeled examples are available. When the number of labeled examples is insufficient to estimate the parameters of the classification functions, the Universum can be used to approximate their prior distribution. The experimental results can be explained using the concept of Universum introduced by Vapnik, that is, Universum examples implicitly specify a prior distribution on the set of classification functions.

  • Two-Stage Learning to Predict Human Eye Fixations via SDAEs

    Publication Year: 2015, Page(s): 1

    Saliency detection models aiming to quantitatively predict human eye-attended locations in the visual field have been receiving increasing research interest in recent years. Unlike traditional methods that rely on hand-designed features and contrast inference mechanisms, this paper proposes a novel framework to learn saliency detection models from raw image data using deep networks. The proposed framework mainly consists of two learning stages. In the first learning stage, we develop a stacked denoising autoencoder (SDAE) model to learn robust, representative features from raw image data in an unsupervised manner. The second learning stage aims to jointly learn optimal mechanisms to capture the intrinsic mutual patterns as the feature contrast and to integrate them for final saliency prediction. Given pairs of a center patch and its surrounding patches represented by the features learned at the first stage, an SDAE network is trained under the supervision of eye fixation labels, which achieves contrast inference and contrast integration simultaneously. Experiments on three publicly available eye tracking benchmarks and comparisons with 16 state-of-the-art approaches demonstrate the effectiveness of the proposed framework.

  • MOD* Lite: An Incremental Path Planning Algorithm Taking Care of Multiple Objectives

    Publication Year: 2015, Page(s): 1

    The need to determine a path from an initial location to a target one is a crucial task in many applications, such as virtual simulations, robotics, and computer games. Almost all existing algorithms are designed to find optimal or suboptimal solutions considering only a single objective, namely path length. However, in many real-life applications path length is not the sole criterion for optimization; there are multiple criteria to be optimized that cannot be transformed into one another. In this paper, we introduce a novel multiobjective incremental algorithm, multiobjective D* lite (MOD* lite), built upon the well-known path planning algorithm D* lite. A number of experiments are designed to compare the solution quality and execution time requirements of MOD* lite with the multiobjective A* algorithm, an alternative genetic algorithm we developed (multiobjective genetic path planning), and the strength Pareto evolutionary algorithm.

  • Block-Row Sparse Multiview Multilabel Learning for Image Classification

    Publication Year: 2015, Page(s): 1

    In image analysis, images are often represented by multiple visual features (also known as multiview features) that aim to describe them better and thereby achieve strong learning performance. Since feature extraction is carried out separately on each view, the multiple visual features of images may include overlap, noise, and redundancy, so learning with all the derived views of the data can decrease effectiveness. To address this, this paper simultaneously conducts hierarchical feature selection and multiview multilabel (MVML) learning for multiview image classification by embedding a proposed new block-row regularizer into the MVML framework. The block-row regularizer, which concatenates a Frobenius-norm (F-norm) regularizer and an ℓ2,1-norm regularizer, is designed to conduct hierarchical feature selection: the F-norm regularizer conducts high-level feature selection to select the informative views (i.e., discard the uninformative views), and the ℓ2,1-norm regularizer then conducts low-level feature selection on the informative views. The rationale of the block-row regularizer is to avoid over-fitting (via the block-row regularizer), to remove redundant views and preserve the natural group structures of the data (via the F-norm regularizer), and to remove noisy features (via the ℓ2,1-norm regularizer). We further devise a computationally efficient algorithm to optimize the derived objective function and theoretically prove the convergence of the proposed optimization method. Finally, results on real image data sets show that the proposed method outperforms two baseline algorithms and three state-of-the-art algorithms in terms of classification performance.

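    The two norms in the block-row regularizer, and the row-wise shrinkage that the ℓ2,1 penalty induces, can be written compactly as below. This is only an illustration of why the ℓ2,1 term yields row-sparse (feature-level) selection; the grouping of rows, the threshold t, and the overall optimization are not taken from the paper.

        import numpy as np

        def l21_norm(W):
            # Sum of the Euclidean norms of the rows of W (one row per feature).
            return float(np.sum(np.linalg.norm(W, axis=1)))

        def frobenius_norm(W):
            return float(np.linalg.norm(W, 'fro'))

        def prox_l21(W, t):
            # Proximal operator of t * ||W||_{2,1}: shrink each row toward zero,
            # setting weak rows exactly to zero, which removes the corresponding features.
            norms = np.linalg.norm(W, axis=1, keepdims=True)
            scale = np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))
            return W * scale

        W = np.random.default_rng(0).standard_normal((6, 3))
        W_sparse = prox_l21(W, t=1.0)
        kept_rows = np.flatnonzero(np.linalg.norm(W_sparse, axis=1) > 0)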

Aims & Scope

The scope of the IEEE Transactions on Cybernetics includes computational approaches to the field of cybernetics.


Meet Our Editors

Editor-in-Chief
Prof. Jun Wang
Dept. of Mechanical & Automation Engineering
The Chinese University of Hong Kong
Shatin, New Territories, Hong Kong
Tel: +852 39438472
Email: ieee-tcyb@mae.cuhk.edu.hk