
Non-adversarial Robustness of Deep Learning Methods for Computer Vision


Abstract:


Non-adversarial robustness, also known as natural robustness, is a property of deep learning models that enables them to maintain performance even when faced with distribution shifts caused by natural variations in data. However, achieving this property is challenging because it is difficult to predict in advance the types of distribution shifts that may occur. To address this challenge, researchers have proposed various approaches, some of which anticipate potential distribution shifts, while others utilize knowledge about shifts that have already occurred to enhance model generalizability. In this paper, we present a brief overview of the most recent techniques for improving the robustness of computer vision methods, as well as a summary of commonly used robustness benchmark datasets for evaluating model performance under data distribution shifts. Finally, we examine the strengths and limitations of the approaches reviewed and identify general trends in deep learning robustness improvement for computer vision.
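As a minimal sketch of the kind of evaluation such robustness benchmarks perform (the toy data, classifier, and noise model here are illustrative assumptions, not taken from the paper): a model is fit on clean data, and its accuracy is re-measured under a simulated natural shift of increasing severity, analogous to the corruption-severity protocol of benchmarks such as ImageNet-C.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy task: two well-separated Gaussian classes in 2-D.
n = 200
X0 = rng.normal(loc=-1.0, scale=0.5, size=(n, 2))
X1 = rng.normal(loc=+1.0, scale=0.5, size=(n, 2))
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

# "Model": a nearest-centroid classifier fit on the clean distribution.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(X):
    # Assign each point to the class with the nearest centroid.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
    return d.argmin(axis=1)

def accuracy(X, y):
    return float((predict(X) == y).mean())

# Simulate a non-adversarial distribution shift: additive sensor noise
# at increasing severity, and report the accuracy drop at each level.
clean_acc = accuracy(X, y)
for severity in (0.5, 1.0, 2.0):
    X_shift = X + rng.normal(scale=severity, size=X.shape)
    drop = clean_acc - accuracy(X_shift, y)
    print(f"severity {severity}: accuracy drop = {drop:.3f}")
```

The accuracy gap between clean and shifted inputs is the quantity robustness benchmarks report; the approaches surveyed in the paper aim to keep that gap small without knowing the shift in advance.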
Date of Conference: 05-08 June 2023
Date Added to IEEE Xplore: 27 July 2023
Conference Location: East Sarajevo, Bosnia and Herzegovina

