Abstract:
We introduce a novel zero-shot learning (ZSL) method, termed ‘self-alignment training’, and use it to train a vanilla autoencoder, which we evaluate on four prominent ZSL benchmarks: CUB, SUN, AWA1 and AWA2. Despite being a far simpler model than competing approaches, our method achieves results on par with the state of the art (SOTA). In addition, we present a novel ‘contrastive-loss’ objective that allows autoencoders to learn from negative samples. In particular, we achieve a new SOTA of 64.5 on AWA2 for generalised ZSL and a new SOTA of 47.7 on SUN for standard ZSL. The code is publicly available at https://github.com/Wluper/satae.
Date of Conference: 12-14 December 2022
Date Added to IEEE Xplore: 23 March 2023