
Automatic Configuration of Deep Neural Networks with Parallel Efficient Global Optimization


Abstract:

Designing the architecture of an artificial neural network is a cumbersome task because of the numerous parameters to configure, including activation functions, layer types, and hyper-parameters. Given the large number of parameters in most modern networks, finding a good configuration for a given task by hand is intractable. In this paper, the Mixed Integer Parallel Efficient Global Optimization (MIP-EGO) algorithm is proposed to automatically configure convolutional neural network architectures. On several image classification tasks, this approach is shown to find network architectures whose prediction accuracy is competitive with the best hand-crafted ones in the literature, while using only a fraction of the training epochs. Moreover, instead of the standard sequential evaluation in EGO, several candidate architectures are proposed and evaluated in parallel, which reduces the execution overhead significantly and leads to efficient automation of deep neural network design.
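
To make the batch-parallel, mixed-integer EGO idea concrete, the following minimal Python sketch (not the paper's implementation) encodes a hypothetical configuration space with categorical, integer, and continuous parameters, fits a random-forest surrogate to previously evaluated configurations, and evaluates the most promising candidates in parallel. A toy objective stands in for actually training a CNN, and a greedy ranking of random samples stands in for the acquisition-function maximization used in EGO; all names and parameter ranges here are illustrative assumptions.

```python
"""Illustrative sketch of a batch, mixed-integer EGO-style loop (assumptions, not the authors' code)."""
import random
from concurrent.futures import ThreadPoolExecutor

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical mixed-integer search space: categorical, integer, continuous.
ACTIVATIONS = ["relu", "elu", "tanh"]   # categorical choice
LAYER_RANGE = (2, 8)                    # integer parameter
LR_RANGE = (1e-4, 1e-1)                 # continuous parameter (log-uniform)

def sample_config():
    return {
        "activation": random.choice(ACTIVATIONS),
        "n_layers": random.randint(*LAYER_RANGE),
        "lr": 10 ** random.uniform(np.log10(LR_RANGE[0]), np.log10(LR_RANGE[1])),
    }

def encode(cfg):
    # Numeric encoding so a regression surrogate can handle the mixed types.
    return [ACTIVATIONS.index(cfg["activation"]), cfg["n_layers"], np.log10(cfg["lr"])]

def evaluate(cfg):
    # Placeholder for "train the CNN for a few epochs and return validation error".
    return (abs(cfg["n_layers"] - 5) * 0.05
            + abs(np.log10(cfg["lr"]) + 2.0) * 0.1
            + (0.0 if cfg["activation"] == "relu" else 0.03))

def propose_batch(model, q=4, n_samples=500):
    # Rank random candidates by predicted error; return the q most promising.
    cands = [sample_config() for _ in range(n_samples)]
    preds = model.predict(np.array([encode(c) for c in cands]))
    return [cands[i] for i in np.argsort(preds)[:q]]

# Initial design, evaluated in parallel.
configs = [sample_config() for _ in range(8)]
with ThreadPoolExecutor(max_workers=4) as pool:
    errors = list(pool.map(evaluate, configs))

for _ in range(5):
    surrogate = RandomForestRegressor(n_estimators=50).fit(
        np.array([encode(c) for c in configs]), np.array(errors))
    batch = propose_batch(surrogate, q=4)
    with ThreadPoolExecutor(max_workers=4) as pool:  # parallel batch evaluation
        batch_errors = list(pool.map(evaluate, batch))
    configs += batch
    errors += batch_errors

print("best configuration found:", configs[int(np.argmin(errors))])
```

In this sketch, proposing and evaluating a batch of q candidates per iteration is what replaces the standard one-at-a-time evaluation of sequential EGO; with q workers available, wall-clock time per iteration is roughly that of a single evaluation.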
Date of Conference: 14-19 July 2019
Date Added to IEEE Xplore: 30 September 2019
Conference Location: Budapest, Hungary
