
Learning the world from its words: Anchor-agnostic Transformers for Fingerprint-based Indoor Localization


Abstract:

In this paper, we propose Anchor-agnostic Transformers (AaTs) that exploit the attention mechanism for Received Signal Strength (RSS) based fingerprinting localization. In real-world applications, the RSS modality is well known for its extreme sensitivity to dynamic environments. Because most machine learning algorithms applied to the RSS modality lack any attention mechanism, they capture only superficial representations rather than the subtle but distinct ones that characterize specific locations, which leads to significant degradation in the testing phase. In contrast, AaTs focus exclusively on the relevant anchors in every RSS sequence to capture these subtle but distinct representations. This also allows the model to disregard redundant clues introduced by noisy ambient conditions, thus achieving better accuracy in fingerprinting localization. Moreover, explicitly resolving collapse problems at the feature level (i.e., non-informative or homogeneous features) further invigorates the self-attention process, so that subtle but distinct representations of specific locations are captured with ease. To this end, we augment our proposed model with two sub-constraints, namely covariance and variance losses, which are combined with the main task during representation learning in a novel multi-task learning manner. To evaluate AaTs, we compare the models against state-of-the-art (SoTA) methods on three benchmark indoor localization datasets. The experimental results confirm our hypothesis and show that our proposed models provide substantially higher accuracy.
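
The abstract describes two ingredients: a Transformer that attends over per-anchor RSS readings, and variance/covariance sub-constraints that counteract feature collapse. The paper itself is not reproduced on this page, so the sketch below is only a plausible PyTorch interpretation of that description; the model class, loss function, and all hyperparameters (AaTLocalizer, vc_losses, num_anchors, the 0.1 loss weights, etc.) are illustrative assumptions, not the authors' implementation.

# Hedged sketch (not the authors' code): each anchor's RSS reading is treated as a
# token, a Transformer encoder attends across anchors, and VICReg-style variance and
# covariance penalties on the pooled features discourage collapse, combined with the
# main localization loss in a multi-task fashion. All names are illustrative.
import torch
import torch.nn as nn

class AaTLocalizer(nn.Module):
    def __init__(self, num_anchors: int, d_model: int = 64, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        # Project each scalar RSS value so every anchor becomes a d_model-dim token.
        self.token_proj = nn.Linear(1, d_model)
        self.anchor_embed = nn.Parameter(torch.randn(num_anchors, d_model) * 0.02)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 2)  # predict (x, y) coordinates

    def forward(self, rss):                       # rss: (batch, num_anchors)
        tokens = self.token_proj(rss.unsqueeze(-1)) + self.anchor_embed
        feats = self.encoder(tokens).mean(dim=1)  # pooled per-location representation
        return self.head(feats), feats

def vc_losses(feats, eps: float = 1e-4, gamma: float = 1.0):
    """Variance and covariance penalties that discourage non-informative or homogeneous features."""
    feats = feats - feats.mean(dim=0)
    std = torch.sqrt(feats.var(dim=0) + eps)
    var_loss = torch.relu(gamma - std).mean()     # keep each dimension's std above gamma
    n, d = feats.shape
    cov = (feats.T @ feats) / (n - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    cov_loss = (off_diag ** 2).sum() / d          # decorrelate feature dimensions
    return var_loss, cov_loss

# Multi-task objective: main regression loss plus the two sub-constraints.
model = AaTLocalizer(num_anchors=20)
rss = torch.randn(32, 20)           # dummy batch of RSS fingerprints
target = torch.randn(32, 2)         # dummy ground-truth coordinates
pred, feats = model(rss)
var_loss, cov_loss = vc_losses(feats)
loss = nn.functional.mse_loss(pred, target) + 0.1 * var_loss + 0.1 * cov_loss

The 0.1 weights on the two sub-constraints are placeholders; in practice they would be tuned so that the regularizers support, rather than dominate, the localization objective.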
Date of Conference: 13-17 March 2023
Date Added to IEEE Xplore: 18 April 2023
Conference Location: Atlanta, GA, USA
