IEEE Transactions on Neural Networks and Learning Systems - new TOC
http://ieeexplore.ieee.org
TOC Alert for Publication #5962385, November 30, 2020
Volume 31, Issue 12

Table of Contents (p. C1)

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS Publication Information (p. C2)

Weakly Supervised Complets Ranking for Deep Image Quality Modeling (pp. 5041-5054)

DACH: Domain Adaptation Without Domain Information (pp. 5055-5067)

Learning Multiple Parameters for Kernel Collaborative Representation Classification (pp. 5068-5078)

PID Controller-Based Stochastic Optimization Acceleration for Deep Neural Networks (pp. 5079-5091)
  Code: https://github.com/tensorboy/PIDOptimizer

Synchronization of Delayed Neural Networks via Integral-Based Event-Triggered Scheme (pp. 5092-5102)

MASK-RL: Multiagent Video Object Segmentation Framework Through Reinforcement Learning (pp. 5103-5115)

RNN for Perturbed Manipulability Optimization of Manipulators Based on a Distributed Scheme: A Game-Theoretic Perspective (pp. 5116-5126)

A Novel Connectivity-Preserving Control Design for Rendezvous Problem of Networked Uncertain Nonlinear Systems (pp. 5127-5137)

Monostability and Multistability for Almost-Periodic Solutions of Fractional-Order Neural Networks With Unsaturating Piecewise Linear Activation Functions (pp. 5138-5152)

Bi-Stream Pose-Guided Region Ensemble Network for Fingertip Localization From Stereo Images (pp. 5153-5165)

Adaptive Neural Network Backstepping Control of Fractional-Order Nonlinear Systems With Actuator Faults (pp. 5166-5177)

Entropy and Confidence-Based Undersampling Boosting Random Forests for Imbalanced Problems (pp. 5178-5191)

Multiclass Oblique Random Forests With Dual-Incremental Learning Capacity (pp. 5192-5203)

Subspace Distribution Adaptation Frameworks for Domain Adaptation (pp. 5204-5218)
  Excerpt: ... generalized covariate shift assumption and adapt the source distribution to the target distribution in a subspace by applying
a distribution adaptation function. Accordingly, we propose two frameworks: Bregman-divergence-embedded structural risk minimization (BSRM) and joint structural risk minimization (JSRM), in which the subspace distribution adaptation function and the target prediction model are jointly learned. Under certain instantiations, convex optimization problems are derived from both frameworks. Experimental results on synthetic and real-world text and image data sets show that the proposed methods outperform state-of-the-art domain adaptation techniques with statistical significance.

Learning Salient and Discriminative Descriptor for Palmprint Feature Extraction and Identification (pp. 5219-5230)
  Excerpt: ... a priori knowledge and cannot adapt well to different palmprint recognition scenarios, including contact-based, contactless, and multispectral palmprint recognition, which limits the application and popularization of palmprint recognition. In this article, motivated by least square regression, we propose a salient and discriminative descriptor learning method (SDDLM) for general-scenario palmprint recognition. Unlike conventional palmprint feature extraction methods, the SDDLM jointly learns noise and salient information from the pixels of palmprint images; the learned noise forces the projection matrix to extract salient and discriminative features from each palmprint sample, so the SDDLM adapts to multiple scenarios. Experiments were conducted on the IITD, CASIA, GPDS, PolyU near-infrared (NIR), noisy IITD, and noisy GPDS palmprint databases, as well as on palm vein and dorsal hand vein databases.
The experimental results show that the proposed SDDLM consistently outperformed both classical and state-of-the-art palmprint recognition methods.

Recent Advances on Dynamical Behaviors of Coupled Neural Networks With and Without Reaction–Diffusion Terms (pp. 5231-5244)

Optimal Elevator Group Control via Deep Asynchronous Actor–Critic Learning (pp. 5245-5256)

A Brain-Inspired Framework for Evolutionary Artificial General Intelligence (pp. 5257-5271)
  Project site: http://www.feagi.org

An Accelerated Finite-Time Convergent Neural Network for Visual Servoing of a Flexible Surgical Endoscope With Physical and RCM Constraints (pp. 5272-5284)

Circular Complex-Valued GMDH-Type Neural Network for Real-Valued Classification Problems (pp. 5285-5299)

Unsupervised AER Object Recognition Based on Multiscale Spatio-Temporal Features and Spiking Neurons (pp. 5300-5311)

Distributed Complementary Binary Quantization for Joint Hash Table Learning (pp. 5312-5323)

Hybrid Deep Learning-Gaussian Process Network for Pedestrian Lane Detection in Unstructured Scenes (pp. 5324-5338)

Design and Comprehensive Analysis of a Noise-Tolerant ZNN Model With Limited-Time Convergence for Time-Dependent Nonlinear Minimization (pp. 5339-5348)

Why ResNet Works? Residuals Generalize (pp. 5349-5362)
  Excerpt: ... an O(1/√N) margin-based multiclass generalization bound is obtained for ResNet, as an exemplary case of any deep neural network with residual connections; generalization guarantees for similar state-of-the-art architectures, such as DenseNet and ResNeXt, follow straightforwardly.
According to the obtained generalization bound, regularization terms should be introduced in practice to keep the norms of the weight matrices from growing too large, thereby ensuring good generalization ability; this justifies the technique of weight decay.

Multiple Convolutional Recurrent Neural Networks for Fault Identification and Performance Degradation Evaluation of High-Speed Train Bogie (pp. 5363-5376)

Beyond Expectation: Deep Joint Mean and Quantile Regression for Spatiotemporal Problems (pp. 5377-5389)

Adaptive Tracking Control of State Constraint Systems Based on Differential Neural Networks: A Barrier Lyapunov Function Approach (pp. 5390-5401)

Low-Rank Tensor Train Coefficient Array Estimation for Tensor-on-Tensor Regression (pp. 5402-5411)
  Excerpt: ... an ℓ2 constraint is imposed to avoid overfitting, and hierarchical alternating least squares is used to solve the optimization problem. Numerical experiments on a synthetic data set and two real-life data sets demonstrate that the proposed method outperforms state-of-the-art methods in prediction accuracy with comparable computational complexity, and is more computationally efficient when the data are high dimensional with a small size in each mode.

Cross-Modal Attention With Semantic Consistence for Image–Text Matching (pp. 5412-5425)

Ensemble Stochastic Configuration Networks for Estimating Prediction Intervals: A Simultaneous Robust Training Algorithm and Its Application (pp. 5426-5440)

Safe Intermittent Reinforcement Learning With Static and Dynamic Event Generators (pp. 5441-5455)

Stochastic Finite-Time H∞ State Estimation for Discrete-Time Semi-Markovian Jump Neural Networks With Time-Varying Delays (pp. 5456-5467)
  Excerpt: ... the H∞ state estimation problem is addressed for a class of discrete-time neural networks with semi-Markovian jump parameters and time-varying
delays. The focus is mainly on the design of a state estimator such that the resulting error system is stochastically finite-time bounded with a prescribed H∞ performance level, via finite-time Lyapunov stability theory. By constructing a delay-product-type Lyapunov functional, which fully accounts for the time-varying delays and the characteristics of the activation functions, and by using the Jensen summation inequality, the free-weighting-matrix approach, and the extended reciprocally convex matrix inequality, sufficient conditions are established in terms of linear matrix inequalities to ensure the existence of the state estimator. Finally, numerical examples with simulation results illustrate the effectiveness of the proposed results.

Learning Deep Gradient Descent Optimization for Image Deconvolution (pp. 5468-5482)

Synchronization of Coupled Time-Delay Neural Networks With Mode-Dependent Average Dwell Time Switching (pp. 5483-5496)

A Hybrid-Learning Algorithm for Online Dynamic State Estimation in Multimachine Power Systems (pp. 5497-5508)

Deep Subspace Clustering (pp. 5509-5521)

Adaptive Optimal Control for Stochastic Multiplayer Differential Games Using On-Policy and Off-Policy Reinforcement Learning (pp. 5522-5533)

An Online Event-Triggered Near-Optimal Controller for Nash Solution in Interconnected System (pp. 5534-5548)

From Discriminant to Complete: Reinforcement Searching-Agent Learning for Weakly Supervised Object Detection (pp. 5549-5560)

MBA: Mini-Batch AUC Optimization (pp. 5561-5574)

Pairwise Constraint Propagation With Dual Adversarial Manifold Regularization (pp. 5575-5587)

Heterogeneous Domain Adaptation: An Unsupervised Approach (pp. 5588-5602)

A Universal Approximation Result for Difference of Log-Sum-Exp Neural Networks (pp. 5603-5612)

Generalized Convolution Spectral Mixture for Multitask Gaussian Processes (pp. 5613-5623)

Revisiting L2,1-Norm Robustness With Vector Outlier Regularization (pp. 5624-5629)
  Excerpt: ... the L2,1-norm function as a robust loss/error function. However, the robustness of the L2,1-norm function is not yet well understood. In this brief, we propose a new vector outlier regularization (VOR) framework to understand and analyze the robustness of the L2,1-norm function. The VOR function defines a data point to be an outlier if it lies beyond a threshold with respect to a theoretical prediction, and regularizes it, i.e., pulls it back to the threshold line. Thus, in the VOR function, how far an outlier lies from its theoretically predicted value does not affect the final regularization and analysis results. An important property of the VOR function is that it has an equivalent continuous formulation, from which we prove that the L2,1-norm function is the limiting case of the proposed VOR function. This result provides a new and intuitive explanation for the robustness of the L2,1-norm function. As an example, we apply the VOR function to matrix factorization and propose a VOR principal component analysis (VORPCA), showing its benefits on data reconstruction and clustering tasks.

Transferable Linear Discriminant Analysis (pp. 5630-5638)

Parameter Selection for Linear Support Vector Regression (pp. 5639-5644)
  Excerpt: ... et al. (2015), an effective parameter-selection procedure using warm-start techniques to solve a sequence of optimization problems was proposed for linear classification. We extend their techniques to linear SVR, which raises new and challenging issues; in particular, linear classification involves only the regularization parameter, whereas linear SVR has an additional error-sensitivity parameter.
We investigate the effective range of each parameter and the order in which the two parameters should be checked. Based on this work, an effective parameter-selection tool for linear SVR is publicly available.

Image-Based Model Parameter Optimization Using Model-Assisted Generative Adversarial Networks (pp. 5645-5650)

IEEE Computational Intelligence Society Information (p. C3)

IEEE Transactions on Neural Networks and Learning Systems Information for Authors (p. C4)
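The vector outlier regularization entry above describes pulling an outlier back to a threshold line so that its distance beyond the threshold no longer matters. A minimal illustrative sketch of that idea, assuming a simple capped per-point penalty (the function name and form below are illustrative, not the paper's exact formulation):

```python
def vor_penalty(residual: float, threshold: float) -> float:
    """Capped penalty illustrating the VOR idea: a point whose residual
    exceeds the threshold is treated as an outlier and 'pulled back' to
    the threshold line, so its excess distance has no further effect."""
    return min(abs(residual), threshold)

# Inliers are penalized by their actual deviation:
print(vor_penalty(0.3, 1.0))    # 0.3
# Outliers all contribute the same capped amount, however extreme:
print(vor_penalty(5.0, 1.0))    # 1.0
print(vor_penalty(500.0, 1.0))  # 1.0
```

The last two calls returning the same value is exactly the robustness property the excerpt states: how far an outlier lies from its predicted value does not affect the final regularization.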
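The linear SVR parameter-selection entry above rests on warm-starting a sequence of optimization problems over the regularization parameter. A self-contained toy sketch of that idea, assuming a 1-D subgradient solver for the epsilon-insensitive objective (the solver, data, and step sizes are illustrative assumptions, not the authors' tool):

```python
def train_linear_svr(xs, ys, C, eps=0.1, lr=1e-3, iters=8000, w0=0.0, b0=0.0):
    """Subgradient descent on 0.5*w^2 + C * sum(max(0, |w*x + b - y| - eps)),
    warm-started from (w0, b0)."""
    w, b = w0, b0
    for _ in range(iters):
        gw, gb = w, 0.0  # gradient of the 0.5*w^2 regularizer
        for x, y in zip(xs, ys):
            r = w * x + b - y
            if r > eps:       # above the eps-insensitive tube
                gw += C * x
                gb += C
            elif r < -eps:    # below the tube
                gw -= C * x
                gb -= C
        w -= lr * gw
        b -= lr * gb
    return w, b

xs, ys = [0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0]  # toy data on y = 2x + 1
w, b = 0.0, 0.0
for C in (0.01, 0.1, 1.0):                            # sweep the regularization
    w, b = train_linear_svr(xs, ys, C, w0=w, b0=b)    # parameter, warm-starting
print(w, b)                                           # lands near the true line
```

Each solve in the sweep starts from the previous solution rather than from scratch, which is the warm-start trick the excerpt describes; the paper's contribution is handling the extra error-sensitivity parameter (eps here) on top of this.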