1. INTRODUCTION
Facial image manipulation using GAN architectures has gained popularity [1], [2], [3], [4] and is increasingly used in a wide variety of real-world application scenarios [5]. The diffusion of manipulated images poses a serious threat to public trust, and many efforts have been made to distinguish fake images from real ones. However, in many cases, knowing that a face image is fake without solid proof of why and where it has been manipulated is not sufficient. As an example, Figure 1 shows some real images reconstructed and partially corrupted by a StyleGAN2 architecture [2], [3], [6]. All images depict the same identity and all of them have been manipulated, yet the goal of the manipulation differs from image to image, with a different facial expression artificially injected into each one.