IEEE Transactions on Neural Networks and Learning Systems - New TOC
http://ieeexplore.ieee.org
TOC Alert for Publication #5962385, December 8, 2016

Table of Contents, Volume 27, Issue 12

- Table of contents (pp. C1-2457)
- IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS publication information (p. C2)
- Training Radial Basis Function Neural Networks for Classification via Class-Specific Clustering (pp. 2458-2471)
- Similarity Constraints-Based Structured Output Regression Machine: An Approach to Image Super-Resolution (pp. 2472-2485)
- Deep Learning of Part-Based Representation of Data Using Sparse Autoencoders With Nonnegativity Constraints (pp. 2486-2498)
- A Unified Framework for Representation-Based Subspace Clustering of Out-of-Sample and Large-Scale Data (pp. 2499-2512)
  Abstract (excerpt): [...]2-norm-based representation, and have achieved state-of-the-art performance. However, these methods suffer from two limitations. First, their time complexity is at least proportional to the cube of the data size, which makes them inefficient for large-scale problems. Second, they cannot cope with out-of-sample data that were not used to construct the similarity graph: to cluster each out-of-sample datum, these methods have to recalculate the similarity graph and the cluster membership of the whole data set. In this paper, we propose a unified framework that makes representation-based subspace clustering algorithms feasible for both out-of-sample and large-scale data. Under our framework, the large-scale problem is tackled by converting it into an out-of-sample problem through sampling, clustering, coding, and classifying. Furthermore, we give an estimate of the error bounds by treating each subspace as a point in a hyperspace. Extensive experimental results on various benchmark data sets show that our methods outperform several recently proposed scalable methods in clustering large-scale data sets.
- A Theoretical Foundation of Goal Representation Heuristic Dynamic Programming (pp. 2513-2525)
- Sequential Compact Code Learning for Unsupervised Image Hashing (pp. 2526-2536)
- Organizing Books and Authors by Multilayer SOM (pp. 2537-2550)
- Generalized Higher Order Orthogonal Iteration for Tensor Learning and Decomposition (pp. 2551-2563)
  Abstract (excerpt): [...]N + NRI) log(√I^N)) observations to reliably recover an Nth-order I × I × ... × I tensor of n-rank (r, r, ..., r), compared with the O(rI^(N-1)) observations required by tensor TNM methods (I ≫ R ≥ r). Extensive experimental results show that CTNM is usually more accurate than those methods and is orders of magnitude faster.
- Dynamic Learning From Neural Control for Strict-Feedback Systems With Guaranteed Predefined Performance (pp. 2564-2576)
- Online Solution of Two-Player Zero-Sum Games for Continuous-Time Nonlinear Systems With Completely Unknown Dynamics (pp. 2577-2587)
- Shortcomings/Limitations of Blockwise Granger Causality and Advances of Blockwise New Causality (pp. 2588-2601)
- Semisupervised Multiclass Classification Problems With Scarcity of Labeled Data: A Theoretical Study (pp. 2602-2614)
- Integration-Enhanced Zhang Neural Network for Real-Time-Varying Matrix Inversion in the Presence of Various Kinds of Noises (pp. 2615-2627)
- Scalable Linear Visual Feature Learning via Online Parallel Nonnegative Matrix Factorization (pp. 2628-2642)
- Information Theoretic Subspace Clustering (pp. 2643-2655)
  Abstract (excerpt): [...]2,1-norm, ℓ_α-norm, and correntropy) can be justified from the viewpoint of half-quadratic (HQ) optimization, which facilitates both convergence study and algorithmic development. In particular, a general formulation is proposed to unify HQ-based group sparsity methods into a common framework. On the algorithmic side, we develop information theoretic subspace clustering methods via correntropy. With the help of Parzen window estimation, correntropy is used to handle either outliers under any distribution or sample-specific errors in data. Pairwise link constraints are further treated as a prior structure on the low-rank representations (LRRs). Based on the HQ framework, iterative algorithms are developed to solve the nonconvex information theoretic loss functions. Experimental results on three benchmark databases show that our methods further improve the robustness of LRR subspace clustering and outperform other state-of-the-art subspace clustering methods.
- Adaptive Scaling of Cluster Boundaries for Large-Scale Social Media Data Clustering (pp. 2656-2669)
- K-MEAP: Multiple Exemplars Affinity Propagation With Specified K Clusters (pp. 2670-2682)
- Landslide Displacement Prediction With Uncertainty Based on Neural Networks With Random Hidden Weights (pp. 2683-2695)
- Impulsive Synchronization of Reaction–Diffusion Neural Networks With Mixed Delays and Its Application to Image Encryption (pp. 2696-2710)
- MSDLSR: Margin Scalable Discriminative Least Squares Regression for Multicategory Classification (pp. 2711-2717)
  Abstract (excerpt): [...]2-support vector machine. Based on this fact, we further provide a theorem on the margin of DLSR. With this theorem, we add an explicit constraint on DLSR to restrict the number of zeros of the dragging values, so as to control the margin of DLSR. The new model is called MSDLSR. Theoretically, we analyze the determination of the margin and support vectors of MSDLSR. Extensive experiments illustrate that our method outperforms current state-of-the-art approaches on various machine learning and real-world data sets.
- Data-Driven Modeling for UGI Gasification Processes via an Enhanced Genetic BP Neural Network With Link Switches (pp. 2718-2729)
- Is a Complex-Valued Stepsize Advantageous in Complex-Valued Gradient Learning Algorithms? (pp. 2730-2735)
- Enhanced Logical Stochastic Resonance in Synthetic Genetic Networks (pp. 2736-2739)
- A Boosting Approach to Exploit Instance Correlations for Multi-Instance Classification (pp. 2740-2747)
  Abstract (excerpt): [...]p-norm is integrated to localize the witness instances and form the bag scores from classifier outputs. The contributions are twofold. First, a flexible and concise model for Boosting is proposed via L_p-norm localization and exponential loss optimization. The scores for bag-level classification are fused directly from the instance feature space without probabilistic assumptions. Second, gradient and Newton descent optimizations are applied to derive the weak learners for Boosting. In particular, the instance correlations are exploited by fitting the weights and Newton updates for the weak learner construction. The final Boosted classifiers are sums of iteratively chosen weak learners. Experiments demonstrate that the proposed L_p-norm-localized Boosting approach significantly improves MI classification performance. Compared with the state of the art, the approach achieves the highest MI classification accuracy on 7/10 benchmark data sets.
- Using Digital Masks to Enhance the Bandwidth Tolerance and Improve the Performance of On-Chip Reservoir Computing Systems (pp. 2748-2753)
- Synchronization Analysis and Design of Coupled Boolean Networks Based on Periodic Switching Sequences (pp. 2754-2759)
  Abstract (excerpt): [...]t + 1)th step at most, where T_t is the transient period of the leader BN.
- Power Quality Analysis Using a Hybrid Model of the Fuzzy Min–Max Neural Network and Clustering Tree (pp. 2760-2767)
- Max-Margin-Based Discriminative Feature Learning (pp. 2768-2775)
- IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS Special Section on Deep Reinforcement Learning and Adaptive Dynamic Programming (p. 2776)
- IEEE Computational Intelligence Society Information (p. C3)
- IEEE Transactions on Neural Networks information for authors (p. C4)
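The "sampling, clustering, coding, and classifying" pipeline described in the abstract of "A Unified Framework for Representation-Based Subspace Clustering of Out-of-Sample and Large-Scale Data" can be illustrated with a minimal sketch. This is not the paper's algorithm: plain k-means stands in for the (cubic-cost) representation-based in-sample clusterer, nearest-center assignment stands in for the coding and classifying steps, and the toy data set is a hypothetical assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a large-scale data set: points drawn along two directions in R^2.
X = np.vstack([np.outer(rng.standard_normal(200), [1.0, 0.2]),
               np.outer(rng.standard_normal(200), [0.3, 1.0])])

def fit_in_sample(S, k=2, iters=50):
    """Cluster a small in-sample set S. Plain k-means is used here as a
    placeholder for any expensive representation-based subspace clusterer."""
    centers = S[rng.choice(len(S), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((S[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([S[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers

# 1) Sampling: run the expensive clusterer on a small subset only.
centers = fit_in_sample(X[rng.choice(len(X), size=50, replace=False)])

# 2)-4) Coding/classifying: every point (in-sample or out-of-sample) is assigned
# to a learned cluster without recomputing anything over the whole data set.
out_labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
```

The point of the pattern is that the out-of-sample step is linear in the data size, so the cubic cost is paid only on the sampled subset.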
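As background for "Generalized Higher Order Orthogonal Iteration for Tensor Learning and Decomposition": a minimal higher-order orthogonal iteration (HOOI) for Tucker decomposition can be written with NumPy alone. This is a textbook sketch, not the CTNM method from the paper; the ranks and toy tensor below are assumptions for illustration.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move the given mode to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def multi_mode_product(G, mats, skip=None):
    """Multiply tensor G along each mode m by mats[m], optionally skipping one mode."""
    for m, M in enumerate(mats):
        if m == skip:
            continue
        G = np.moveaxis(np.tensordot(M, G, axes=(1, m)), 0, m)
    return G

def hooi(T, ranks, iters=10):
    """Higher-order orthogonal iteration: HOSVD init, then alternating SVD updates."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    for _ in range(iters):
        for n in range(T.ndim):
            Y = multi_mode_product(T, [u.T for u in U], skip=n)
            U[n] = np.linalg.svd(unfold(Y, n), full_matrices=False)[0][:, :ranks[n]]
    G = multi_mode_product(T, [u.T for u in U])  # core tensor
    return G, U

# Build an exactly n-rank-(2, 2, 2) toy tensor and recover it.
rng = np.random.default_rng(2)
core = rng.standard_normal((2, 2, 2))
factors = [rng.standard_normal((n, 2)) for n in (6, 5, 4)]
T = multi_mode_product(core, factors)
G, U = hooi(T, (2, 2, 2))
T_hat = multi_mode_product(G, U)
err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
```

For an exactly low-n-rank tensor, the reconstruction error is at machine precision.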
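The half-quadratic (HQ) view of correntropy mentioned in the "Information Theoretic Subspace Clustering" abstract reduces, in its simplest form, to iteratively reweighted least squares: the HQ auxiliary variables become closed-form Gaussian weights that suppress outliers. The toy robust-mean problem below is an assumption for illustration and not the paper's LRR algorithm.

```python
import numpy as np

def correntropy_robust_mean(x, sigma=1.0, iters=30):
    """Maximize sum_i exp(-(x_i - m)^2 / (2 sigma^2)) over m.
    HQ optimization alternates closed-form weight updates (the auxiliary
    variables) with a weighted least-squares step."""
    m = np.median(x)  # robust initial guess
    for _ in range(iters):
        w = np.exp(-((x - m) ** 2) / (2 * sigma ** 2))  # HQ auxiliary weights
        m = np.sum(w * x) / np.sum(w)                   # weighted LS update
    return m

# Inliers near 0 plus gross outliers: the Gaussian (Parzen window) kernel
# assigns the outliers near-zero weight, so the estimate stays near 0,
# while the plain mean is dragged far away.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 0.1, 100), np.full(10, 50.0)])
m = correntropy_robust_mean(x)
```

Each weight update and each weighted mean are both closed-form, which is the convergence convenience the HQ framework refers to.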
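The role of the L_p-norm in "A Boosting Approach to Exploit Instance Correlations for Multi-Instance Classification" (localizing witness instances when forming bag scores) can be seen in a generic L_p pooling function: p = 1 averages all instance scores, while large p approaches the max and so is dominated by the highest-scoring (witness) instances. The scores below are hypothetical and this is not the paper's Boosting model.

```python
import numpy as np

def lp_bag_score(instance_scores, p):
    """L_p-style pooling of nonnegative instance-level classifier outputs into
    one bag score: p = 1 gives the mean; as p grows, the score approaches the
    max, concentrating on the 'witness' instances."""
    s = np.asarray(instance_scores, dtype=float)
    return np.mean(s ** p) ** (1.0 / p)

scores = [0.1, 0.2, 0.9]               # hypothetical instance scores in a bag
mean_pool = lp_bag_score(scores, 1)    # plain average
sharp_pool = lp_bag_score(scores, 50)  # close to max(scores)
```

Tuning p thus interpolates between the standard MI assumptions "all instances matter" and "one witness decides the bag".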