
MU-MAE: Multimodal Masked Autoencoders-Based One-Shot Learning



Abstract:

With the exponential growth of multimedia data, leveraging multimodal sensors presents a promising approach for improving accuracy in human activity recognition. Nevertheless, accurately identifying these activities using both video data and wearable sensor data is challenging due to labor-intensive data annotation and reliance on external pretrained models or additional data. To address these challenges, we introduce Multimodal Masked Autoencoders-Based One-Shot Learning (Mu-MAE). Mu-MAE integrates a multimodal masked autoencoder with a synchronized masking strategy tailored for wearable sensors. This masking strategy compels the networks to capture more meaningful spatiotemporal features, enabling effective self-supervised pretraining without the need for external data. Furthermore, Mu-MAE feeds the representations extracted by the multimodal masked autoencoder as prior information into a cross-attention multimodal fusion layer. This fusion layer emphasizes spatiotemporal features requiring attention across modalities while highlighting differences from other classes, aiding classification in metric-based one-shot learning. Comprehensive evaluations on MMAct one-shot classification show that Mu-MAE outperforms all evaluated approaches, achieving up to 80.17% accuracy for five-way one-shot multimodal classification without the use of additional data.
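The abstract names two mechanisms: a masking strategy synchronized across modalities, and a cross-attention layer that fuses the pretrained representations. The following is a minimal sketch of one plausible reading, not the authors' implementation: it assumes per-timestep token embeddings for both streams, a 75% mask ratio, and a single pre-norm cross-attention block built on PyTorch's nn.MultiheadAttention; all names, shapes, and hyperparameters are illustrative.

import torch
import torch.nn as nn

def synchronized_mask(num_timesteps: int, mask_ratio: float = 0.75):
    """Draw one set of temporal indices to mask, shared by all modalities,
    so neither stream can reconstruct its masked steps by copying the
    other's unmasked tokens at the same timestep."""
    num_keep = int(num_timesteps * (1 - mask_ratio))
    perm = torch.randperm(num_timesteps)
    keep_idx = perm[:num_keep].sort().values
    mask_idx = perm[num_keep:].sort().values
    return keep_idx, mask_idx

class CrossAttentionFusion(nn.Module):
    """One cross-attention fusion block: video tokens act as queries and
    attend over wearable-sensor keys/values, with a residual connection."""
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, video_tok, sensor_tok):
        q = self.norm_q(video_tok)
        kv = self.norm_kv(sensor_tok)
        fused, _ = self.attn(q, kv, kv)   # video queries, sensor keys/values
        return video_tok + fused          # residual keeps the video stream

# Usage: mask both streams at the same timesteps, then fuse the visible part.
B, T, D = 2, 16, 256
video = torch.randn(B, T, D)    # stand-in per-frame video embeddings
sensor = torch.randn(B, T, D)   # stand-in per-step sensor embeddings
keep, _ = synchronized_mask(T, mask_ratio=0.75)
fusion = CrossAttentionFusion(D, heads=4)
out = fusion(video[:, keep], sensor[:, keep])
print(out.shape)                # torch.Size([2, 4, 256])

In a masked-autoencoder pretraining loop, the decoder would reconstruct the tokens at mask_idx for each modality; at few-shot evaluation time, the fused embeddings would feed a metric-based (e.g., prototype-distance) classifier. Both of those stages are omitted here for brevity.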
Date of Conference: 07-09 August 2024
Date Added to IEEE Xplore: 15 October 2024
Conference Location: San Jose, CA, USA

I. Introduction

Human Activity Recognition (HAR) plays a pivotal role in the design and deployment of intelligent systems across various domains, ranging from healthcare and assistive technologies to smart homes and autonomous vehicles [1]-[4]. For instance, precise activity recognition can enable collaborative robots to assist workers by delivering tools at the right moment [5].
