I. Introduction
The rapid spread and progress of generative models, such as diffusion models (DMs) and generative adversarial networks (GANs), have led to the widespread use of generated images and, in turn, to the unauthorized dissemination of synthetic images across social networks. In response to the threats posed by forged images, significant research efforts have been devoted to forgery detection [1]. Beyond determining whether an image is real or fake (synthetic image detection), understanding the provenance (origin) of an image, referred to as synthetic image attribution, also plays an important role. Several methods have been proposed for model-level attribution, relying on the artefacts or signatures (fingerprints) left by the models in the images they generate [2]. Recent works have started addressing the attribution task differently, attributing synthetic images to the source architecture that generated them rather than to the specific model [3]. Such an approach overcomes a limitation of model-level attribution in real-world applications, where model-level granularity is often not needed and may even be undesired (e.g., if an attacker steals a copyrighted GAN and modifies its weights by fine-tuning on a different dataset, model-level attribution will fail).