Abstract:
Despite the many successful applications of deep learning models for multidimensional signal and image processing, most traditional neural networks process data represented by (multidimensional) arrays of real numbers. The intercorrelation between feature channels is usually expected to be learned from the training data, requiring numerous parameters and careful training. In contrast, vector-valued neural networks (referred to as V-nets) are designed to process arrays of vectors and naturally account for the intercorrelation between feature channels. Consequently, they usually have fewer parameters and often train more robustly than traditional neural networks. This article aims to present a broad framework for V-nets. In this context, hypercomplex-valued neural networks are regarded as vector-valued models with additional algebraic properties. Furthermore, this article explains the relationship between vector-valued and traditional neural networks. To be precise, a V-net can be obtained by placing restrictions on a real-valued model so that it accounts for the intercorrelation between feature channels. Finally, I show how V-nets, including hypercomplex-valued neural networks, can be implemented in current deep learning libraries as real-valued networks.
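The final claim, that a V-net is a real-valued network with structural restrictions, is concrete enough to sketch. Below is a minimal illustration (my own, not code from the article): a quaternion-valued linear layer in PyTorch whose Hamilton product is written entirely with real-valued matrix multiplications, i.e., a real linear layer constrained to a block structure. The class name, the input layout (the four quaternion components concatenated along the feature dimension), and the initialization scale are assumptions made for this sketch.

```python
# A minimal sketch, assuming PyTorch and a channel layout [r, i, j, k]
# concatenated along the last dimension. Not the article's reference code.
import torch
import torch.nn as nn

class QuaternionLinear(nn.Module):
    """Quaternion linear layer as a constrained real-valued layer.

    It uses 4x fewer parameters than a real nn.Linear of the same
    input/output width, because one quaternion weight matrix is shared
    across the four feature channels via the Hamilton product.
    """
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # One real weight matrix per quaternion component (assumed init scale).
        self.r = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.i = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.j = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.k = nn.Parameter(torch.randn(out_features, in_features) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (..., 4 * in_features): components [r, i, j, k].
        xr, xi, xj, xk = x.chunk(4, dim=-1)
        # Hamilton product (x * W convention) expressed with real-valued
        # matrix multiplications: a real layer restricted to this block form.
        yr = xr @ self.r.T - xi @ self.i.T - xj @ self.j.T - xk @ self.k.T
        yi = xr @ self.i.T + xi @ self.r.T + xj @ self.k.T - xk @ self.j.T
        yj = xr @ self.j.T - xi @ self.k.T + xj @ self.r.T + xk @ self.i.T
        yk = xr @ self.k.T + xi @ self.j.T - xj @ self.i.T + xk @ self.r.T
        return torch.cat([yr, yi, yj, yk], dim=-1)

layer = QuaternionLinear(8, 16)
out = layer(torch.randn(5, 32))   # batch of 5, 8 quaternion-valued features
print(out.shape)                  # torch.Size([5, 64])
```

Counting parameters makes the point: this layer holds 4 x (16 x 8) = 512 real weights, while an unconstrained real nn.Linear(32, 64) without bias holds 2,048. The block structure is exactly the kind of restriction on a real-valued model that the abstract describes.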
Published in: IEEE Signal Processing Magazine (Volume: 41, Issue: 3, May 2024)