
AMT^+: Acoustic Multi-Target Tracking With Smartphone MIMO System



Abstract:

Acoustic target tracking has shown great advantages for device-free human-machine interaction over vision/RF-based mechanisms. However, existing approaches for portable devices track only a single target and cannot handle the ubiquitous and highly challenging multi-target situations such as double-hand multimedia control and multi-player gaming. In this paper, we propose AMT^+, a pioneering smartphone MIMO system that achieves centimeter-level multi-target tracking. The challenge of multi-target occlusion is effectively addressed by employing multiple speaker-microphone pairs. However, MIMO raises a unique challenge: the superposition of multi-source signals caused by cross-correlation among the speakers. We first tackle this challenge by designing weakly cross-correlated signals to reduce interference passively. In AMT^+, we further integrate self-interference cancellation to minimize interference actively. The most distinguishing advantage of AMT^+ lies in eliminating the multipath effect raised by non-particle targets, which is commonly ignored in previous work that hastily assumes targets to be particles. AMT^+ employs Doppler filtering over delay subtraction for echo suppression. Further, based on the modeling of non-particle target reflections, we introduce a distance-projection-based method for continuous target identification and tracking. Implemented on commercial smartphones, AMT^+ achieves average errors of 0.54 cm, 1.37 cm, and 2.13 cm for single-, double-, and triple-target tracking, respectively, and an average classification accuracy of 97.0% for 14 control gestures.
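
To make the MIMO cross-correlation challenge above concrete, the short Python sketch below compares the normalized auto- and cross-correlation peaks of two candidate transmit waveforms. It is only an illustration of the weak-cross-correlation idea, not the paper's actual waveform design: the sampling rate, frame length, and chirp bands (17-19 kHz and 19.5-21.5 kHz) are hypothetical assumptions.

# Illustrative sketch (not AMT^+'s actual signal design): check how "weakly
# cross-correlated" two candidate transmit signals are by comparing the peak of
# their normalized cross-correlation against the auto-correlation peak of 1.0.
import numpy as np

FS = 48_000          # typical smartphone sampling rate (assumption)
DURATION = 0.04      # 40 ms transmit frame (assumption)
t = np.arange(int(FS * DURATION)) / FS

def up_chirp(f0, f1):
    """Linear up-chirp sweeping from f0 to f1 over the frame."""
    k = (f1 - f0) / DURATION                      # sweep rate in Hz/s
    return np.sin(2 * np.pi * (f0 * t + 0.5 * k * t**2))

def peak_normalized_corr(a, b):
    """Peak of |cross-correlation| normalized by the two signal energies."""
    xc = np.correlate(a, b, mode="full")
    return np.max(np.abs(xc)) / np.sqrt(np.dot(a, a) * np.dot(b, b))

s1 = up_chirp(17_000, 19_000)    # speaker-1 waveform (hypothetical band)
s2 = up_chirp(19_500, 21_500)    # speaker-2 waveform (hypothetical band)

print("auto-corr peak  s1:    ", peak_normalized_corr(s1, s1))  # 1.0 by construction
print("cross-corr peak s1/s2: ", peak_normalized_corr(s1, s2))  # far below 1.0

With disjoint bands, the cross-correlation peak stays far below the auto-correlation peak, which is the property a weakly cross-correlated signal set aims for so that each microphone can separate the echoes of concurrently transmitting speakers.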
Published in: IEEE Transactions on Mobile Computing (Volume: 23, Issue: 12, December 2024)
Page(s): 12650 - 12665
Date of Publication: 21 June 2024



I. Introduction

Device-free target tracking and gesture recognition have attracted increasing attention for convenient human-machine interaction. Target tracking based on acoustic signals is preferred over vision-based solutions, which require favorable lighting, and RF-based solutions, which suffer from coarse-grained resolution and lower accuracy. However, existing acoustic approaches on portable devices, e.g., smartphones, track only a single target, and therefore forbid the movement of any object other than the intended target.
