Understanding the Distributions of Aggregation Layers in Deep Neural Networks | IEEE Journals & Magazine | IEEE Xplore


Abstract:

The process of aggregation is ubiquitous in almost all deep network models. It serves as an important mechanism for consolidating deep features into a more compact representation, while increasing robustness to overfitting and providing spatial invariance in deep networks. In particular, the proximity of global aggregation layers to the output layers of DNNs means that aggregated features directly influence a deep network's performance. A better understanding of this relationship can be obtained using information-theoretic methods. However, this requires knowledge of the distributions of the activations of aggregation layers. To this end, we propose a novel mathematical formulation for analytically modeling the probability distributions of the output values of layers involved in deep feature aggregation. An important outcome is our ability to analytically predict the Kullback–Leibler (KL) divergence of output nodes in a DNN. We also experimentally verify our theoretical predictions against empirical observations across a broad range of classification tasks and datasets.
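The idea behind the abstract can be illustrated with a minimal sketch (not the paper's actual formulation): global average pooling aggregates many activations per channel, so the pooled outputs tend toward a simple analytic distribution (here, a Gaussian), and one can then measure the KL divergence between the empirical pooled distribution and that analytic model. All array shapes and the synthetic feature maps below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature maps from a conv layer: (batch, channels, H, W).
features = rng.normal(loc=1.0, scale=0.5, size=(256, 8, 7, 7))

# Global average pooling: aggregate each channel's spatial map to a scalar.
pooled = features.mean(axis=(2, 3))  # shape (256, 8)

# Averages of many activations concentrate toward a Gaussian, which is the
# kind of analytic distribution one would fit to an aggregation layer.
mu, sigma = pooled.mean(), pooled.std()

# Discretized KL(p || q) = sum p * log(p / q) between the empirical
# histogram of pooled activations and the fitted Gaussian.
counts, edges = np.histogram(pooled, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
width = edges[1] - edges[0]
p = counts * width  # empirical bin probabilities
q = width * np.exp(-0.5 * ((centers - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
mask = p > 0
kl = np.sum(p[mask] * np.log(p[mask] / q[mask]))
print(f"KL(empirical || Gaussian fit) = {kl:.4f}")
```

On this synthetic input the divergence comes out close to zero, consistent with the pooled outputs being near-Gaussian; the paper's contribution is predicting such divergences analytically rather than estimating them from histograms.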
Page(s): 5536–5550
Date of Publication: 05 October 2022

PubMed ID: 36197864


