1. Introduction
The unprecedented success of machine learning (ML), and particularly deep learning, gives rise to new risks and threats. A notable emerging threat is adversarial examples [1]. In this attack, an adversary designs minor, seemingly unnoticeable perturbations that, when added to an input of an ML model, have a notable effect on its output. The last decade has witnessed an ongoing arms race between increasingly sophisticated adversarial attacks and the development of countermeasures aiming to mitigate the sensitivity of ML models to such perturbations [2, 3]. A common way to make this threat concrete is sketched below.
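As an illustration only (the notation here is ours, not taken from the cited works), the attack is often cast as finding a norm-bounded perturbation that flips the model's prediction:

\begin{equation*}
\text{find } \delta \quad \text{such that} \quad \|\delta\|_p \le \epsilon \quad \text{and} \quad f(x + \delta) \ne f(x),
\end{equation*}

where $f$ denotes the ML model, $x$ a clean input, $\|\cdot\|_p$ a chosen norm, and $\epsilon$ the perturbation budget that keeps the change seemingly unnoticeable.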