IEEE Transactions on Neural Networks and Learning Systems - new TOC
http://ieeexplore.ieee.org
TOC Alert for Publication# 5962385, 26 May 2022

Table of Contents - IEEE Transactions on Neural Networks and Learning Systems, Volume 33, Issue 5

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS Publication Information, pp. C2

Editorial: Biologically Learned/Inspired Methods for Sensing, Control, and Decision, pp. 1820-1824

Bio-Motivated Two-Level Event-Triggered Controller for Nonlinear Systems, pp. 1825-1832

Continuous Online Adaptation of Bioinspired Adaptive Neuroendocrine Control for Autonomous Walking Robots, pp. 1833-1845

Spiking Adaptive Dynamic Programming Based on Poisson Process for Discrete-Time Nonlinear Systems, pp. 1846-1856

Octopus-Inspired Microgripper for Deformation-Controlled Biological Sample Manipulation, pp. 1857-1866
  Abstract excerpt: …$4~\mu\text{m}$. Adaptive robust control provides a fast and accurate response in point-to-point deformation testing; the average response time is less than 30 s, and the average error is no larger than 1 pixel.

Multi-Output Selective Ensemble Identification of Nonlinear and Nonstationary Industrial Processes, pp. 1867-1880

Asymptotic Tracking Control for Uncertain MIMO Systems: A Biologically Inspired ESN Approach, pp. 1881-1890

Distributed Adaptive Fault-Tolerant Time-Varying Formation Control of Unmanned Airships With Limited Communication Ranges Against Input Saturation for Smart City Observation, pp. 1891-1904

Event-Triggered ADP for Nonzero-Sum Games of Unknown Nonlinear Systems, pp. 1905-1913

Bio-Inspired Dynamic Collective Choice in Large-Population Systems: A Robust Mean-Field Game Perspective, pp. 1914-1924
  Abstract excerpt: …$H_{\infty}$ tracking problem by taking the population behavior as a fixed trajectory, and then establish a mean-field system to estimate the population behavior. Optimal control strategies and worst-case disturbances, independent of the population size, are designed, which provide a way to realize the collective decision-making behavior that emerges in biological systems. We further prove that the designed strategies constitute an $\epsilon_{N}$-Nash equilibrium, where $\epsilon_{N}$ goes to zero as the number of agents increases to infinity. The effectiveness of the proposed results is illustrated through two simulation examples.

Triple-Memory Networks: A Brain-Inspired Method for Continual Learning, pp. 1925-1934

Robust Transcoding Sensory Information With Neural Spikes, pp. 1935-1946

Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks, pp. 1947-1958

An Event-Based Digital Time Difference Encoder Model Implementation for Neuromorphic Systems, pp. 1959-1973

Chain-Structure Echo State Network With Stochastic Optimization: Methodology and Application, pp. 1974-1985

Event-Driven Intrinsic Plasticity for Spiking Convolutional Neural Networks, pp. 1986-1995

How Frequency Injection Locking Can Train Oscillatory Neural Networks to Compute in Phase, pp. 1996-2009

Memory Recall: A Simple Neural Network Training Framework Against Catastrophic Forgetting, pp. 2010-2022

Unsupervised Estimation of Monocular Depth and VO in Dynamic Environments via Hybrid Masks, pp. 2023-2033

Multisample Online Learning for Probabilistic Spiking Neural Networks, pp. 2034-2044

Deep Reinforcement Learning With Modulated Hebbian Plus Q-Network Architecture, pp. 2045-2056

Training Deep Neural Network for Optimal Power Allocation in Islanded Microgrid Systems: A Distributed Learning-Based Approach, pp. 2057-2069

Recognizing Missing Electromyography Signal by Data Split Reorganization Strategy and Weight-Based Multiple Neural Network Voting Method, pp. 2070-2079

A Multiobjective Evolutionary Nonlinear Ensemble Learning With Evolutionary Feature Selection for Silicon Prediction in Blast Furnace, pp. 2080-2093

A Brain-Inspired Approach for Collision-Free Movement Planning in the Small Operational Space, pp. 2094-2105

Memristive Circuit Implementation of a Self-Repairing Network Based on Biological Astrocytes in Robot Application, pp. 2106-2120

Memristor-Based Edge Computing of Blaze Block for Image Recognition, pp. 2121-2131
  Abstract excerpt: …$V_{t}$ of the memristor; thus, the circuit is more stable. Experiments show that the proposed memristor-based circuit achieves an accuracy of 84.38% on the CIFAR-10 data set, with advantages in computing resources, calculation time, and power consumption. Experiments also show that, when the number of multistate conductance levels is $2^{8}$ and the quantization bit width of the data is 8, the circuit achieves its best balance between power consumption and production cost.

Target Convergence Analysis of Cancer-Inspired Swarms for Early Disease Diagnosis and Targeted Collective Therapy, pp. 2132-2146

Toward Cognitive Navigation: Design and Implementation of a Biologically Inspired Head Direction Cell Network, pp. 2147-2158

An MVMD-CCA Recognition Algorithm in SSVEP-Based BCI and Its Application in Robot Control, pp. 2159-2167

Brain-Inspired Experience Reinforcement Model for Bin Packing in Varying Environments, pp. 2168-2180

Robust Facial Landmark Detection by Multiorder Multiconstraint Deep Networks, pp. 2181-2194

Hierarchical Representation Learning in Graph Neural Networks With Node Decimation Pooling, pp. 2195-2207
  Abstract excerpt: …MAXCUT solution. Afterward, the selected nodes are connected with Kron reduction to form the coarsened graph. Finally, since the resulting graph is very dense, we apply a sparsification procedure that prunes the adjacency matrix of the coarsened graph to reduce the computational cost in the GNN. Notably, we show that it is possible to remove many edges without significantly altering the graph structure. Experimental results show that NDP is more efficient than state-of-the-art graph pooling operators while reaching, at the same time, competitive performance on a significant variety of graph classification tasks.

SymNet: A Simple Symmetric Positive Definite Manifold Deep Learning Method for Image Set Classification, pp. 2208-2222
  Abstract excerpt: …principal component analysis (PCA) technique is utilized to learn the multistage connection weights without requiring complicated computations, making the network easier to build and train. On the tail of SymNet, the kernel discriminant analysis (KDA) algorithm is coupled with the output vectorized feature representations to perform discriminative subspace learning. Extensive experiments and comparisons with state-of-the-art methods on six typical visual classification tasks demonstrate the feasibility and validity of the proposed SymNet.

An Off-Policy Trust Region Policy Optimization Method With Monotonic Improvement Guarantee for Deep Reinforcement Learning, pp. 2223-2235

Contrastive Adversarial Domain Adaptation Networks for Speaker Recognition, pp. 2236-2245

A Neuromorphic CMOS Circuit With Self-Repairing Capability, pp. 2246-2258
  Abstract excerpt: …$\mu\text{m}^{2}$ silicon area, and its power consumption is about $65.4~\mu\text{W}$. This neuromorphic fault-tolerant circuit can be considered a key candidate for future silicon neuronal systems and the implementation of neurorobotic and neuro-inspired circuits.

StructADMM: Achieving Ultrahigh Efficiency in Structured Pruning for DNNs, pp. 2259-2273
  Abstract excerpt: …$2.58\times$ and $3.65\times$ average measured speedup on two GPUs, clearly outperforming the prior work. The average speedups reach $3.15\times$ and $8.52\times$ when allowing a moderate accuracy loss of 2%. In this case, the model compression for convolutional layers is $15.0\times$, corresponding to an $11.93\times$ measured CPU speedup. As another example, for the ResNet-18 model on the CIFAR-10 data set, we achieve an unprecedented $54.2\times$ structured pruning rate on CONV layers. This is a $32\times$ higher pruning rate compared with recent work and can further translate into a $7.6\times$ inference time speedup on the Adreno 640 mobile GPU compared with the original, unpruned DNN model. We share our codes and models at the link http://bit.ly/2M0V7DO.

Unsupervised Feature Selection With Extended OLSDA via Embedding Nonnegative Manifold Structure, pp. 2274-2280
  Abstract excerpt: …$\ell_{2,1}$ regularization is imposed to ensure that the projection matrix is row sparse for efficient feature selection and is proved to be equivalent to $\ell_{2,0}$ regularization. Finally, extensive experiments on nine benchmark data sets are conducted to demonstrate the effectiveness of the proposed approach.

IEEE Computational Intelligence Society Information, pp. C3

IEEE Transactions on Neural Networks and Learning Systems Information for Authors, pp. C4