
Learning Neural Network Classifiers by Distributing Nearest Neighbors on Adaptive Hypersphere


Impact Statement:
Pair-wise constraint-based methods are a widely used optimization framework for tuning the parameters of neural networks. However, due to an unscalable embedding space and inefficient pairing, these methods may suffer premature convergence or divergence, which degrades network performance. The ASNN method proposed in this study addresses both problems. The efficacy of ASNN is evaluated on 29 machine learning datasets and 3 image recognition datasets, where it outperforms its competitors on the majority of them; this superiority is further validated through rigorous statistical analysis. The proposed ASNN applies to a wide range of neural network training scenarios and offers an alternative optimization framework for improving the generalization performance of neural networks.

Abstract:

In this study, the adaptive hypersphere nearest neighbors (ASNN) method is proposed as an optimization framework to enhance the generalization performance of neural network classifiers. For classification tasks, a neural network draws decision boundaries by constructing discriminative features of the samples. To learn such features, pair-wise constraint-based methods, which combine a pair-wise loss with an embedding space (e.g., a hypersphere space), have gained considerable attention over the last decade owing to their flexibility and separability. Despite their success, pair-wise constraint-based methods still suffer from premature convergence or divergence, driven by two main challenges: 1) the poor scalability of the embedding space constrains the variety of the distribution of embedded samples, thereby increasing the optimization difficulty; and 2) it is hard to select suitable positive/negative pairs during training. In order to address the aforement...
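The full ASNN algorithm is cut off in the abstract above, but the generic setup it builds on can be sketched: features are L2-normalized onto a unit hypersphere, and a pair-wise loss pulls same-class (positive) pairs together while pushing different-class (negative) pairs apart. The following is a minimal illustrative sketch of that baseline, not the ASNN method itself; the function names and the `margin` value are assumptions for illustration.

```python
import math

def hypersphere_embed(vectors):
    """Project each feature vector onto the unit hypersphere (L2-normalize)."""
    out = []
    for v in vectors:
        norm = math.sqrt(sum(c * c for c in v)) or 1.0
        out.append([c / norm for c in v])
    return out

def pairwise_constraint_loss(embeddings, labels, margin=0.5):
    """Generic pair-wise constraint loss on hypersphere embeddings:
    positive (same-class) pairs are attracted (loss = 1 - cosine),
    negative (different-class) pairs are repelled until their cosine
    similarity falls below `margin`. Averaged over all pairs."""
    total, count = 0.0, 0
    n = len(embeddings)
    for i in range(n):
        for j in range(i + 1, n):
            # Rows are unit-norm, so the dot product is cosine similarity.
            cos = sum(a * b for a, b in zip(embeddings[i], embeddings[j]))
            if labels[i] == labels[j]:
                total += 1.0 - cos                # positive pair: attract
            else:
                total += max(0.0, cos - margin)   # negative pair: repel
            count += 1
    return total / max(count, 1)
```

For example, two classes embedded along orthogonal directions incur zero loss (same-class cosine is 1, cross-class cosine is 0, below the margin), while mislabeled clusters incur a positive loss. The scalability and pairing problems the abstract names arise precisely because the fixed unit hypersphere limits how such clusters can be arranged, and because which pairs enter this double loop strongly affects optimization.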
Published in: IEEE Transactions on Artificial Intelligence ( Volume: 6, Issue: 1, January 2025)
Page(s): 234 - 249
Date of Publication: 10 October 2024
Electronic ISSN: 2691-4581

