SoundingActions: Learning How Actions Sound from Narrated Egocentric Videos


Abstract:

We propose a novel self-supervised embedding to learn how actions sound from narrated in-the-wild egocentric videos. Whereas existing methods rely on curated data with known audio-visual correspondence, our multimodal contrastive-consensus coding (MC3) embedding reinforces the associations between audio, language, and vision when all modality pairs agree, while diminishing those associations when any one pair does not. We show our approach can successfully discover how the long tail of human actions sounds from egocentric video, outperforming an array of recent multimodal embedding techniques on two datasets (Ego4D and EPIC-Sounds) and multiple cross-modal tasks.
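A minimal sketch (PyTorch) of a consensus-gated contrastive objective in this spirit: three pairwise InfoNCE terms over audio, video, and language embeddings, each weighted by a per-sample agreement score that is high only when all modality pairs agree. This is an illustrative assumption, not the paper's exact MC3 formulation; the min-cosine gating rule and all function names are hypothetical.

import torch
import torch.nn.functional as F


def info_nce(a, b, weights, temperature=0.07):
    # Symmetric InfoNCE between two batches of embeddings; each positive
    # pair's contribution is scaled by a per-sample weight in [0, 1].
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0), device=a.device)
    loss_ab = F.cross_entropy(logits, targets, reduction="none")
    loss_ba = F.cross_entropy(logits.t(), targets, reduction="none")
    return (weights * 0.5 * (loss_ab + loss_ba)).mean()


def consensus(audio, video, text):
    # Hypothetical gating rule: per-sample agreement is the weakest pairwise
    # cosine similarity across the three modality pairs, rescaled to [0, 1],
    # so one disagreeing pair is enough to diminish the association.
    a, v, t = (F.normalize(x, dim=-1) for x in (audio, video, text))
    pair_sims = torch.stack([(a * v).sum(-1), (a * t).sum(-1), (v * t).sum(-1)])
    return (pair_sims.min(dim=0).values + 1.0) / 2.0


def consensus_weighted_loss(audio, video, text):
    # Reinforce audio/video/text associations only when all pairs agree.
    w = consensus(audio, video, text).detach()  # used as a gate, not a gradient path
    return (info_nce(audio, video, w)
            + info_nce(audio, text, w)
            + info_nce(video, text, w))


if __name__ == "__main__":
    B, D = 8, 256
    audio, video, text = (torch.randn(B, D) for _ in range(3))
    print(consensus_weighted_loss(audio, video, text))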
Date of Conference: 16-22 June 2024
Date Added to IEEE Xplore: 16 September 2024
Conference Location: Seattle, WA, USA
