Unveiling Statistical Significance of Online Regression Over Multiple Datasets


Abstract:

Despite extensive focus on techniques for evaluating the performance of two learning algorithms on a single dataset, the critical challenge of developing statistical tests to compare multiple algorithms across various datasets has been largely overlooked in most machine learning research. Additionally, in online learning, ensuring statistical significance is essential to validate continuous learning processes, particularly for achieving rapid convergence and handling concept drift in a timely manner. Robust statistical methods are needed to assess the significance of performance differences as data evolves over time. This article examines state-of-the-art online regression models and empirically evaluates several suitable tests. To compare multiple online regression models across various datasets, we employ the Friedman test along with corresponding post-hoc tests. For thorough evaluation, we use both real and synthetic datasets with 5-fold cross-validation and seed averaging, ensuring a comprehensive assessment across diverse data subsets. Our tests generally confirm that the performance of competitive baselines is consistent with their individual reports. However, some statistical test results also indicate that there is still room for improvement in certain aspects of state-of-the-art methods.
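
As a hedged illustration of the comparison procedure named in the abstract (not the authors' evaluation code), the sketch below applies the Friedman test over a hypothetical error matrix (rows = datasets, columns = online regression models) and follows it with a Nemenyi-style critical-difference check on the average ranks. All numerical values are made up for demonstration.

```python
# Minimal sketch: Friedman test + Nemenyi critical difference over multiple datasets.
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# Hypothetical results: rows = datasets, columns = online regression models,
# entries = mean error over 5-fold CV averaged across random seeds.
errors = np.array([
    [0.112, 0.105, 0.131, 0.120],
    [0.098, 0.101, 0.110, 0.097],
    [0.210, 0.185, 0.205, 0.199],
    [0.076, 0.074, 0.089, 0.081],
    [0.154, 0.149, 0.171, 0.160],
])
n_datasets, k_models = errors.shape

# Friedman test: do the k models perform equivalently across the datasets?
stat, p_value = friedmanchisquare(*errors.T)
print(f"Friedman chi-square = {stat:.3f}, p = {p_value:.4f}")

# Average rank per model within each dataset (lower error -> better rank).
ranks = np.apply_along_axis(rankdata, 1, errors)
avg_ranks = ranks.mean(axis=0)

# Nemenyi post-hoc: two models differ significantly if their average ranks
# differ by more than the critical difference CD = q_alpha * sqrt(k(k+1)/(6N)).
# q_alpha = 2.569 is the tabulated value for k = 4 models at alpha = 0.05.
q_alpha = 2.569
cd = q_alpha * np.sqrt(k_models * (k_models + 1) / (6.0 * n_datasets))
print("average ranks:", np.round(avg_ranks, 2), "| critical difference:", round(cd, 2))
```

In this setup, the Friedman test first establishes whether any difference among the models is significant; only then are pairwise rank differences compared against the critical difference.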
Date of Conference: 07-09 August 2024
Date Added to IEEE Xplore: 15 October 2024
Conference Location: San Jose, CA, USA