IEEE Transactions on Neural Networks and Learning Systems - New TOC
http://ieeexplore.ieee.org
TOC Alert for Publication # 5962385, August 13, 2018

Table of contents
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS publication information (p. C2)
Subspace Clustering of Categorical and Numerical Data With an Unknown Number of Clusters (pp. 3308–3325)
Analysis and Control of Output Synchronization in Directed and Undirected Complex Dynamical Networks (pp. 3326–3338)
Distributed Optimal Consensus Control for Nonlinear Multiagent System With Unknown Dynamic (pp. 3339–3348)
Spiking Neural P Systems With Polarizations (pp. 3349–3360)
Organizational Data Classification Based on the *Importance* Concept of Complex Networks (pp. 3361–3373)
Fast Kronecker Product Kernel Methods via Generalized Vec Trick (pp. 3374–3387)
Robust Adaptive Embedded Label Propagation With Weight Learning for Inductive Classification (pp. 3388–3403)
Model Approximation for Switched Genetic Regulatory Networks (pp. 3404–3417)
Synchronizing Neural Networks With Proportional Delays Based on a Class of q-Type Allowable Time Scales (pp. 3418–3428)
Dimensionality Reduction Using Similarity-Induced Embeddings (pp. 3429–3441)
Structured Weak Semantic Space Construction for Visual Categorization (pp. 3442–3451)
Hybrid Fuzzy Wavelet Neural Networks Architecture Based on Polynomial Neural Networks and Fuzzy Set/Relation Inference-Based Wavelet Neurons (pp. 3452–3462)
Multiview Privileged Support Vector Machines (pp. 3463–3477)
Prescribed Performance Control of Uncertain Euler–Lagrange Systems Subject to Full-State Constraints (pp. 3478–3489)
  Abstract (excerpt; beginning truncated in the alert): …1) …smooth; 2) the full-state tracking error converges to a prescribed compact set around the origin within a given finite time, at a controllable rate of convergence that can be uniformly prespecified; 3) with a Nussbaum gain in the loop, the tracking error further shrinks to zero as t → ∞; and 4) the neural network (NN) unit can be safely included in the loop over the entire system operational envelope without the danger of violating the compact-set precondition imposed on the NN training inputs. Furthermore, using Lyapunov analysis, it is proven that all signals of the closed-loop system are semiglobally uniformly ultimately bounded. The effectiveness and benefits of the proposed control method are validated via computer simulation.
Avoiding Congestion in Cluster Consensus of the Second-Order Nonlinear Multiagent Systems (pp. 3490–3498)
Adaptive Unknown Input Estimation by Sliding Modes and Differential Neural Network Observer (pp. 3499–3509)
A Novel Consistent Random Forest Framework: Bernoulli Random Forests (pp. 3510–3523)
Rank-k 2-D Multinomial Logistic Regression for Matrix Data Classification (pp. 3524–3537)
Adaptive Consensus Control of Nonlinear Multiagent Systems With Unknown Control Directions Under Stochastic Topologies (pp. 3538–3547)
Adaboost-LLP: A Boosting Method for Learning With Label Proportions (pp. 3548–3559)
Prototype-Incorporated Emotional Neural Network (pp. 3560–3572)
Cost-Sensitive Learning of Deep Feature Representations From Imbalanced Data (pp. 3573–3587)
Event-Triggered Asynchronous Guaranteed Cost Control for Markov Jump Discrete-Time Neural Networks With Distributed Delay and Channel Fading (pp. 3588–3598)
Event-Based Impulsive Control of Continuous-Time Dynamic Systems and Its Application to Synchronization of Memristive Neural Networks (pp. 3599–3609)
On Selecting Effective Patterns for Fast Support Vector Regression Training (pp. 3610–3622)
Sophisticated Merging Over Random Partitions: A Scalable and Robust Causal Discovery Approach (pp. 3623–3635)
Using Directional Fibers to Locate Fixed Points of Recurrent Neural Networks (pp. 3636–3646)
Learning Low-Rank Decomposition for Pan-Sharpening With Spatial-Spectral Offsets (pp. 3647–3657)
Adaptive Neural Output-Feedback Control for a Class of Nonlower Triangular Nonlinear Systems With Unmodeled Dynamics (pp. 3658–3668)
Boundary Control of 2-D Burgers’ PDE: An Adaptive Dynamic Programming Approach (pp. 3669–3681)
Global Exponential Stability of Impulsive Fuzzy High-Order BAM Neural Networks With Continuously Distributed Delays (pp. 3682–3700)
Beyond Pairwise Matching: Person Reidentification via High-Order Relevance Learning (pp. 3701–3714)
Learning Semantic-Aligned Action Representation (pp. 3715–3725)
Event-Triggered H∞ State Estimation for Delayed Stochastic Memristive Neural Networks With Missing Measurements: The Discrete Time Case (pp. 3726–3737)
  Abstract (excerpt): The H∞ state estimation problem is investigated for a class of discrete-time stochastic memristive neural networks (DSMNNs) with time-varying delays and missing measurements. The DSMNN is subject to both additive deterministic disturbances and multiplicative stochastic noises. The missing measurements are governed by a sequence of random variables obeying the Bernoulli distribution. For the purpose of energy saving, an event-triggered communication scheme is used to determine whether the measurement output is transmitted to the estimator or not. The problem addressed is to design an event-triggered H∞ estimator such that the dynamics of the estimation error are exponentially mean-square stable and a prespecified H∞ disturbance attenuation level is guaranteed. By utilizing a Lyapunov–Krasovskii functional and stochastic analysis techniques, sufficient conditions are derived to guarantee the existence of the desired estimator, and the estimator gains are then characterized in terms of the solution to certain matrix inequalities. Finally, a numerical example demonstrates the usefulness of the proposed event-triggered state estimation scheme.
Exploiting Spatio-Temporal Structure With Recurrent Winner-Take-All Networks (pp. 3738–3746)
Adaptive Approximation-Based Regulation Control for a Class of Uncertain Nonlinear Systems Without Feedback Linearizability (pp. 3747–3760)
Finite-Time Synchronization of Discontinuous Neural Networks With Delays and Mismatched Parameters (pp. 3761–3771)
Efficient Online Learning Algorithms Based on LSTM Neural Networks (pp. 3772–3783)
Credit Card Fraud Detection: A Realistic Modeling and a Novel Learning Strategy (pp. 3784–3797)
Jointly Learning Structured Analysis Discriminative Dictionary and Analysis Multiclass Classifier (pp. 3798–3814)
  Abstract (excerpt; beginning truncated in the alert): …an ℓ2,1-norm constraint on the coding coefficients instead of the ℓ0 or ℓ1 norm, since the ℓ0- or ℓ1-norm constraints applied in most existing DL criteria make the training phase time-consuming. The code-extraction projection, which bridges data with the sparse codes by extracting special features from the given samples, is calculated by minimizing a sparse-code approximation term. A linear classifier is then computed from the approximated sparse codes through an analysis mechanism, so that classification and representation power are considered simultaneously. The classification stage of the model is therefore very efficient, because it avoids the extra, time-consuming sparse-reconstruction step with the trained dictionary that most existing DL algorithms require for each new test sample. Simulations on real image databases demonstrate that the ADDL model obtains performance superior to other state-of-the-art methods.
Dissipativity and Synchronization of Generalized BAM Neural Networks With Multivariate Discontinuous Activations (pp. 3815–3827)
Haze Removal Using Radial Basis Function Networks for Visibility Restoration Applications (pp. 3828–3838)
Online Recorded Data-Based Composite Neural Control of Strict-Feedback Systems With Application to Hypersonic Flight Dynamics (pp. 3839–3849)
Optimized Backstepping for Tracking Control of Strict-Feedback Systems (pp. 3850–3862)
Convex Formulation for Kernel PCA and Its Use in Semisupervised Learning (pp. 3863–3869)
ADMM-Based Algorithm for Training Fault Tolerant RBF Networks and Selecting Centers (pp. 3870–3878)
Augmented Lagrange Programming Neural Network for Localization Using Time-Difference-of-Arrival Measurements (pp. 3879–3884)
Deterministic Convergence for Learning Control Systems Over Iteration-Dependent Tracking Intervals (pp. 3885–3892)
Stability and Guaranteed Cost Analysis of Time-Triggered Boolean Networks (pp. 3893–3899)
LANN-SVD: A Non-Iterative SVD-Based Learning Algorithm for One-Layer Neural Networks (pp. 3900–3905)
Partial-Nodes-Based State Estimation for Complex Networks With Unbounded Distributed Delays (pp. 3906–3912)
  Abstract (excerpt; beginning truncated in the alert): …fraction of the network nodes; the fraction is determined by either practical availability or computational necessity. The PNB state estimator is designed such that the error dynamics of the network state estimation are exponentially ultimately bounded in the presence of measurement errors. Sufficient conditions are established to ensure the existence of the PNB state estimators, and the explicit expression of their gain matrices is then characterized. When the network measurements are free of noise, the main results specialize to exponential stability of the error dynamics. Numerical examples are presented to verify the theoretical results.
Self-Weighted Supervised Discriminative Feature Selection (pp. 3913–3918)
Deep Sparse Tensor Filtering Network for Synthetic Aperture Radar Images Classification (pp. 3919–3924)
IEEE Computational Intelligence Society Information (p. C3)
IEEE Transactions on Neural Networks information for authors (p. C4)
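A recurring device in the abstracts above is the ℓ2,1 mixed norm, which the ADDL dictionary-learning paper uses on the coding coefficients in place of ℓ0/ℓ1 sparsity. As a quick illustration of what that norm computes, here is a minimal sketch (not the authors' code; the row-wise convention is an assumption, since papers vary between summing row and column norms):

```python
import math

def l21_norm(A):
    """l2,1 mixed norm: sum of the Euclidean norms of the rows of A.

    Assumed row-wise convention: ||A||_{2,1} = sum_i sqrt(sum_j A[i][j]^2).
    Some papers sum column norms instead; the exact convention used by
    the ADDL paper is not stated in this alert.
    """
    return sum(math.sqrt(sum(x * x for x in row)) for row in A)

# Minimizing ||A||_{2,1} drives entire rows of A toward zero at once,
# giving structured (row-wise) sparsity, and unlike the l0 pseudo-norm
# it is convex, which is why it yields cheaper training.
A = [[3.0, 0.0],
     [4.0, 0.0],
     [0.0, 0.0]]
print(l21_norm(A))  # row norms 3 + 4 + 0 -> 7.0
```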