IEEE Transactions on Neural Networks and Learning Systems - new TOC
http://ieeexplore.ieee.org
TOC Alert for Publication #5962385, April 23, 2015 (Volume 26, Issue 5)

- Table of contents (p. C1)
- IEEE Transactions on Neural Networks and Learning Systems publication information (p. C2)
- Self-Organizing Neural Networks Integrating Domain Knowledge and Reinforcement Learning (pp. 889-902)
- Variable Neural Adaptive Robust Control: A Switched System Approach (pp. 903-915)
- Integral Reinforcement Learning for Continuous-Time Input-Affine Nonlinear Systems With Simultaneous Invariant Explorations (pp. 916-932)
  Abstract (excerpt): ...-learning I and II are proposed, all of which generate the same convergent sequences as I-PI and IA-PI under the required excitation condition on the exploration. All of the proposed methods are partially or completely model free, and they can explore the state space in a stable manner during online learning. The ISS, invariant-admissibility, and convergence properties of the proposed methods are investigated, and from these, design principles for safe exploration during learning are derived. Neural-network-based implementations of the proposed schemes are also presented. Finally, several numerical simulations verify the effectiveness of the proposed methods.
- Evolutionary Fuzzy ARTMAP Neural Networks for Classification of Semiconductor Defects (pp. 933-950)
- Graph Embedded Nonparametric Mutual Information for Supervised Dimensionality Reduction (pp. 951-963)
- The Minimum Risk Principle That Underlies the Criteria of Bounded Component Analysis (pp. 964-981)
- A One-Class Kernel Fisher Criterion for Outlier Detection (pp. 982-994)
- Semi-Supervised Nearest Mean Classification Through a Constrained Log-Likelihood (pp. 995-1006)
- Adaptive NN Controller Design for a Class of Nonlinear MIMO Discrete-Time Systems (pp. 1007-1018)
  Abstract (excerpt): ...subsystems, and each subsystem contains unknown functions and an external disturbance.
  Due to the complicated structure of discrete-time systems, the presence of the dead zone, and the noncausal problem in discrete time, such systems are difficult to control. To overcome the noncausal problem, coordinate transformations are defined that bring the studied systems into a special form suitable for backstepping design. Radial basis function NNs are used to approximate the unknown functions of the systems, and the adaptation laws and controllers are designed from the transformed systems. Using the Lyapunov method, the closed-loop system is proved stable in the sense that all signals are semiglobally uniformly ultimately bounded and the tracking errors converge to a bounded compact set. Simulation examples and comparisons with previous approaches illustrate the effectiveness of the proposed control algorithm.
- Transfer Learning for Visual Categorization: A Survey (pp. 1019-1034)
- Robust Sensorimotor Representation to Physical Interaction Changes in Humanoid Motion Learning (pp. 1035-1047)
- A New Method for Data Stream Mining Based on the Misclassification Error (pp. 1048-1059)
- Learning to Track Multiple Targets (pp. 1060-1073)
- Adaptive Neural Control of Nonlinear MIMO Systems With Time-Varying Output Constraints (pp. 1074-1085)
- Very Sparse LSSVM Reductions for Large-Scale Data (pp. 1086-1097)
  Abstract (excerpt): ...-norm-based reductions by iteratively sparsifying LSSVM and PFS-LSSVM models. The exact choice of the cardinality of the initial PV set is then unimportant, since the final model is highly sparse. The proposed method overcomes memory constraints and high computational costs, yielding highly sparse reductions of LSSVM models. The approximations of the two models allow them to scale to large-scale datasets.
  Experiments on real-world classification and regression data sets from the UCI repository show that these approaches achieve sparse models without a significant tradeoff in error.
- Sparse Multivariate Gaussian Mixture Regression (pp. 1098-1108)
- Variational Inference With ARD Prior for NIRS Diffuse Optical Tomography (pp. 1109-1114)
- An Efficient Topological Distance-Based Tree Kernel (pp. 1115-1120)
- IEEE Computational Intelligence Society Information (p. C3)
- IEEE Transactions on Neural Networks information for authors (p. C4)
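The adaptive NN controller abstract above relies on radial basis function NNs to approximate the unknown functions of the system. As a generic, hypothetical illustration (not the paper's code), the sketch below fits the output weights of a Gaussian RBF network by batch least squares, an offline stand-in for the online adaptation laws such controllers use; all names, the target function, and the center/width choices are assumptions for the demo:

```python
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian RBF feature vector phi(x) for a scalar input x."""
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

def fit_rbf_weights(xs, ys, centers, width):
    """Least-squares fit of the output weights W in f_hat(x) = phi(x)^T W."""
    Phi = np.array([rbf_features(x, centers, width) for x in xs])
    W, *_ = np.linalg.lstsq(Phi, ys, rcond=None)
    return W

# "Unknown" smooth function to approximate on [-1, 1] (stand-in for an
# unknown subsystem nonlinearity).
f = lambda x: np.sin(2.0 * x) + 0.5 * x

centers = np.linspace(-1.0, 1.0, 9)   # evenly spaced RBF centers
width = 0.3                           # shared Gaussian width
xs = np.linspace(-1.0, 1.0, 50)       # training inputs
W = fit_rbf_weights(xs, f(xs), centers, width)

f_hat = lambda x: rbf_features(x, centers, width) @ W
max_err = max(abs(f_hat(x) - f(x)) for x in xs)
```

In an actual adaptive controller the weights would instead be updated online by the adaptation laws, with the approximation error absorbed into the Lyapunov stability argument.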
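The Very Sparse LSSVM entry above sparsifies models by pruning support values. The following is a minimal, hypothetical sketch (not the authors' algorithm): it solves the standard LSSVM dual linear system with an RBF kernel, then performs one pruning pass that keeps the points with the largest |alpha| and refits, in the spirit of iterative sparsification; the toy data and hyperparameters are assumptions:

```python
import numpy as np

def lssvm_fit(X, y, gamma=100.0, sigma=1.0):
    """Solve the LSSVM dual system [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(X)
    K = np.exp(-((X[:, None] - X[None, :]) ** 2) / (2.0 * sigma ** 2))
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]            # bias b, support values alpha

def lssvm_predict(x, X_sv, alpha, b, sigma=1.0):
    """Kernel expansion over the (possibly pruned) support set."""
    k = np.exp(-((x - X_sv) ** 2) / (2.0 * sigma ** 2))
    return k @ alpha + b

# Toy 1-D regression data (noiseless sinc target).
X = np.linspace(-2.0, 2.0, 40)
y = np.sinc(X)

b, alpha = lssvm_fit(X, y)
full_resid = max(abs(lssvm_predict(x, X, alpha, b) - yi) for x, yi in zip(X, y))

# One pruning pass: keep the 20 points with the largest |alpha| and refit,
# giving a model with half as many support vectors.
keep = np.argsort(-np.abs(alpha))[:20]
b2, alpha2 = lssvm_fit(X[keep], y[keep])
```

A real sparsification loop would repeat the prune-and-refit step until a target sparsity or error budget is reached, which is what makes the initial support-set cardinality unimportant.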