IEEE Transactions on Neural Networks and Learning Systems - Popular
http://ieeexplore.ieee.org
Popular Articles Alert for this Publication #5962385, February 2017

- Extreme Learning Machine for Multilayer Perceptron
  […]1 constraint. By doing so, it achieves more compact and meaningful feature representations than the original ELM; 2) by exploiting the advantages of ELM random feature mapping, the hierarchically encoded outputs are randomly projected before final decision making, which leads to better generalization with faster learning speed; and 3) unlike the greedy layerwise training of deep learning (DL), the hidden layers of the proposed framework are trained in a forward manner: once the previous layer is established, the weights of the current layer are fixed without fine-tuning. Therefore, it has much better learning efficiency than DL. Extensive experiments on various widely used classification data sets show that the proposed algorithm achieves better and faster convergence than existing state-of-the-art hierarchical learning methods. Furthermore, multiple applications in computer vision confirm the generality and capability of the proposed learning scheme.

- Deep Neural Network for Structural Prediction and Lane Detection in Traffic Scene
- Deep Direct Reinforcement Learning for Financial Signal Representation and Trading
- A Combined Adaptive Neural Network and Nonlinear Model Predictive Control for Multirate Networked Industrial Process Control
  […] is investigated using adaptive neural network (NN) control, and it is shown that the outputs of subsystems at the device layer can track the decomposed setpoints. Then, the outputs and inputs of the device-layer subsystems are sampled with sampling period T_{u} at the operation layer to form the index prediction, which is used to predict the overall performance index at a lower frequency. A radial basis function NN is utilized as the prediction function due to its approximation ability. Then, considering the dynamics of the overall closed-loop system, a nonlinear model predictive control method is proposed to guarantee system stability and to compensate for network-induced delays and packet dropouts. Finally, a continuous stirred tank reactor system is given in the simulation part to demonstrate the effectiveness of the proposed method.

- Railway Track Circuit Fault Diagnosis Using Recurrent Neural Networks
- Why Deep Learning Works: A Manifold Disentanglement Perspective
- Dynamic Energy Management System for a Smart Microgrid
- A Novel Twin Support-Vector Machine With Pinball Loss
- A Locality-Constrained and Label Embedding Dictionary Learning Algorithm for Image Classification
- Change Detection in Synthetic Aperture Radar Images Based on Deep Neural Networks
- Distributed Recurrent Neural Networks for Cooperative Control of Manipulators: A Game-Theoretic Perspective
- QRNN: $q$-Generalized Random Neural Network
- A Proposal for Local $k$ Values for $k$-Nearest Neighbor Rule
- Stability Analysis of Neural Networks With Two Delay Components Based on Dynamic Delay Interval Method
- Machine Learning Capabilities of a Simulated Cerebellum
- Impulsive Effects and Stability Analysis on Memristive Neural Networks With Variable Delays
- Blind Image Quality Assessment via Deep Learning
- Value and Policy Iterations in Optimal Control and Adaptive Dynamic Programming
- Short-Term Load and Wind Power Forecasting Using Neural Network-Based Prediction Intervals
- Out-of-Sample Extensions for Non-Parametric Kernel Methods
- Asynchronous Dissipative State Estimation for Stochastic Complex Networks With Quantized Jumping Coupling and Uncertain Measurements
- Growing Echo-State Network With Multiple Subreservoirs
- A Scoring Scheme for Online Feature Selection: Simulating Model Performance Without Retraining
- Graph Theory-Based Pinning Synchronization of Stochastic Complex Dynamical Networks
- A Graph-Embedding Approach to Hierarchical Visual Word Mergence
- Extended Dissipative State Estimation for Markov Jump Neural Networks With Unreliable Links
- LSTM: A Search Space Odyssey
- Neural Network-Based DOBC for a Class of Nonlinear Systems With Unmatched Disturbances
- Nonparametric Density Estimation Based on Self-Organizing Incremental Neural Network for Large Noisy Data
- Hierarchical Change-Detection Tests
- Adaptive Unsupervised Feature Selection With Structure Regularization
- Experienced Gray Wolf Optimization Through Reinforcement Learning and Neural Networks
- Deep Learning of Part-Based Representation of Data Using Sparse Autoencoders With Nonnegativity Constraints
- Cluster Synchronization on Multiple Nonlinearly Coupled Dynamical Subnetworks of Complex Networks With Nonidentical Nodes
- Sequential Nonlinear Learning for Distributed Multiagent Systems via Extreme Learning Machines
- Classification in the Presence of Label Noise: A Survey
- A Spiking Neural Network System for Robust Sequence Recognition
- Storage Free Smart Energy Management for Frequency Control in a Diesel-PV-Fuel Cell-Based Hybrid AC Microgrid
- Infinite Horizon Self-Learning Optimal Control of Nonaffine Discrete-Time Nonlinear Systems
- Air-Breathing Hypersonic Vehicle Tracking Control Based on Adaptive Dynamic Programming
- Feature Combination via Clustering
- Detecting Wash Trade in Financial Market Using Digraphs and Dynamic Programming
- Model-Based Reinforcement Learning for Infinite-Horizon Approximate Optimal Tracking
- Model-Based Adaptive Event-Triggered Control of Strict-Feedback Nonlinear Systems
- 3-D Laser-Based Multiclass and Multiview Object Detection in Cluttered Indoor Scenes
- Finite-Time State Estimation for Recurrent Delayed Neural Networks With Component-Based Event-Triggering Protocol
- $L_{1/2}$ Regularization: A Thresholding Representation Theory and a Fast Solver
  […]_{1/2} regularization has been recognized in recent studies on sparse modeling (particularly on compressed sensing). The L_{1/2} regularization, however, leads to a nonconvex, nonsmooth, and non-Lipschitz optimization problem that is difficult to solve fast and efficiently. In this paper, by developing a thresholding representation theory for L_{1/2} regularization, we propose an iterative half thresholding algorithm for the fast solution of L_{1/2} regularization, corresponding to the well-known iterative soft thresholding algorithm for L_{1} regularization and the iterative hard thresholding algorithm for L_{0} regularization. We prove the existence of the resolvent of the gradient of ||x||_{1/2}^{1/2}, calculate its analytic expression, and establish an alternative feature theorem on solutions of L_{1/2} regularization, based on which a thresholding representation of solutions of L_{1/2} regularization is derived and an optimal regularization parameter setting rule is formulated. The developed theory provides a successful extension of the well-known Moreau proximity forward-backward splitting theory to the L_{1/2} regularization case. We verify the convergence of the iterative half thresholding algorithm and provide a series of experiments to assess the performance of the algorithm. The experiments show that the half thresholding algorithm is effective and efficient, and can be accepted as a fast solver for L_{1/2} regularization. With the new algorithm, we conduct a phase diagram study to further demonstrate the superiority of L_{1/2} regularization over L_{1} regularization.

- Two Machine Learning Approaches for Short-Term Wind Speed Time-Series Prediction
- Iterative Adaptive Dynamic Programming for Solving Unknown Nonlinear Zero-Sum Game Based on Online Data
  […]∞ control is a powerful method to solve the disturbance attenuation problems that occur in some control systems. The design of such controllers relies on solving the zero-sum game (ZSG). In practical applications, however, the exact dynamics is mostly unknown, and identification of the dynamics also produces errors that are detrimental to control performance. To overcome this problem, an iterative adaptive dynamic programming algorithm is proposed in this paper to solve the continuous-time, unknown nonlinear ZSG with only online data. A model-free approach to the Hamilton-Jacobi-Isaacs equation is developed based on the policy iteration method. The control and disturbance policies and the value function are approximated by neural networks (NNs) under a critic-actor-disturber structure. The NN weights are solved by the least-squares method. According to the theoretical analysis, our algorithm is equivalent to a Gauss-Newton method solving an optimization problem, and it converges uniformly to the optimal solution. The online data can also be used repeatedly, which is highly efficient. Simulation results demonstrate its feasibility to solve the unknown nonlinear ZSG. When compared with other algorithms, it saves a significant amount of online measurement time.

- Closed-Loop Modulation of the Pathological Disorders of the Basal Ganglia Network
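Several of the listed abstracts build on the ELM idea of a fixed random feature mapping followed by a least-squares readout. As a minimal sketch of that basic building block (not the hierarchical algorithm of the first abstract; the sigmoid activation and hidden-layer size here are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, Y, n_hidden=64):
    """Basic ELM: random, fixed input weights; only the readout is trained."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never updated)
    b = rng.normal(size=n_hidden)                 # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # random sigmoid feature mapping
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)  # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

Because only the linear readout is solved, training reduces to one least-squares problem, which is the source of the "faster learning speed" claim in the abstracts.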
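The L_{1/2} abstract positions its iterative half thresholding algorithm as the counterpart of the well-known iterative soft thresholding algorithm for L_1. For orientation, here is a minimal sketch of that L_1 baseline (ISTA), not the half-thresholding solver itself; the step-size rule and iteration count are illustrative assumptions:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1 (elementwise soft thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, b, lam, step=None, n_iter=500):
    """ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    if step is None:
        # 1/L, where L = ||A||_2^2 is the Lipschitz constant of the gradient
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # gradient step on the smooth part, then the L_1 proximal (threshold) step
        x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)
    return x
```

The half thresholding algorithm of the paper has exactly this gradient-then-threshold structure, with the soft-threshold operator replaced by the analytic half-threshold operator derived there.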
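The multirate networked-control abstract uses a radial basis function NN as its performance-index predictor "due to its approximation ability". A minimal sketch of such an RBF regressor (generic Gaussian features with a linear readout; the centers and width are illustrative, not the paper's design):

```python
import numpy as np

def rbf_fit(X, y, centers, gamma=1.0):
    """Fit a linear readout over Gaussian RBF features by least squares."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # squared distances to centers
    Phi = np.exp(-gamma * d2)                                   # Gaussian RBF features
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def rbf_predict(X, centers, w, gamma=1.0):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2) @ w
```

With centers fixed, training is again a single linear least-squares solve, which suits the lower-frequency, data-driven index prediction the abstract describes.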