
Dynamic Network Structure: Doubly Stacking Broad Learning Systems With Residuals and Simpler Linear Model Transmission

Abstract:

While the broad learning system (BLS) has demonstrated distinctive performance owing to its solid theoretical foundation, strong generalization capability, and fast learning speed, a relatively large network structure (i.e., a large number of enhancement nodes) is often required to assure satisfactory performance, especially on challenging datasets, which may inevitably deteriorate its generalization capability due to overfitting. In this study, by stacking several broad learning sub-systems, a doubly stacked broad learning system with residuals and simpler linear model transmission, called RST&BLS, is presented to improve BLS in terms of network size, generalization capability, and learning speed. With shared feature nodes and simpler linear models between stacked layers, the design of RST&BLS is motivated by three facets: 1) analogous to human neural behavior, in which certain common neuron blocks are repeatedly activated to deal with correlated problems, an enhanced ensemble of BLS sub-systems results; 2) humans prefer a simple model over a complicated one (as a component of the final model); 3) extra overfitting-avoidance capability between the shared feature nodes and the remaining hidden nodes from the second layer onward is assured in theory. Besides its performance advantage over the comparative methods, experimental results on twenty-one classification/regression datasets indicate the superiority of RST&BLS in terms of a smaller network structure (i.e., fewer adjustable parameters), better generalization capability, and lower computational burden.
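
The abstract describes the mechanics only at a high level. The following is a minimal, hypothetical NumPy sketch of the general idea, assuming a vanilla BLS sub-system (random linear feature nodes, nonlinear enhancement nodes, and a closed-form ridge readout) and a plain residual-stacking loop in which each layer fits what the previous layers left unexplained. The shared feature nodes and the simpler-linear-model transmission that distinguish RST&BLS are not reproduced here, and all names (TinyBLS, fit_residual_stack) are illustrative, not the authors' implementation.

```python
# Hypothetical sketch of a BLS sub-system plus residual stacking (not RST&BLS itself).
import numpy as np


def ridge_readout(A, Y, lam=1e-3):
    """Closed-form ridge solution W = (A^T A + lam*I)^{-1} A^T Y."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ Y)


class TinyBLS:
    """One broad learning sub-system with fixed random hidden weights."""

    def __init__(self, n_feature=20, n_enhance=100, lam=1e-3, seed=0):
        self.n_feature, self.n_enhance, self.lam = n_feature, n_enhance, lam
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        Z = X @ self.Wf + self.bf                # linear feature nodes
        H = np.tanh(Z @ self.We + self.be)       # nonlinear enhancement nodes
        return np.hstack([Z, H])

    def fit(self, X, Y):
        d = X.shape[1]
        self.Wf = self.rng.standard_normal((d, self.n_feature)) / np.sqrt(d)
        self.bf = self.rng.standard_normal(self.n_feature)
        self.We = self.rng.standard_normal((self.n_feature, self.n_enhance))
        self.be = self.rng.standard_normal(self.n_enhance)
        # Only the output weights are learned, in closed form.
        self.W = ridge_readout(self._hidden(X), Y, self.lam)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.W


def fit_residual_stack(X, Y, n_layers=3):
    """Stack sub-systems; each layer fits the residual left by the previous layers."""
    layers, residual = [], Y.astype(float).copy()
    for i in range(n_layers):
        bls = TinyBLS(seed=i).fit(X, residual)
        residual = residual - bls.predict(X)
        layers.append(bls)
    return layers


def predict_stack(layers, X):
    """Final prediction is the sum of all stacked layers' outputs."""
    return sum(bls.predict(X) for bls in layers)
```

Under this sketch, each sub-system trains only its ridge readout while the random hidden weights stay fixed, which is what gives BLS-style models their fast, non-iterative learning; stacking on residuals lets several small sub-systems stand in for one very wide one, in line with the abstract's argument for a smaller network structure.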
Page(s): 1378 - 1395
Date of Publication: 16 February 2022
Electronic ISSN: 2471-285X
