Abstract:
Capturing animal locomotion in the wild is far more challenging than in controlled laboratory settings. Wildlife subjects move unpredictably, and issues such as scaling, occlusion, lighting changes, and the lack of ground-truth data make motion capture difficult. Unlike human biomechanics, where machine learning thrives on annotated datasets, such resources are scarce for wildlife. Multimodal sensing offers a solution by combining the strengths of various sensors, such as Light Detection and Ranging (LiDAR) and thermal cameras, to compensate for individual sensor limitations. In addition, some sensors, like LiDAR, can provide training data for monocular pose estimation models. We introduce a multimodal sensor system (M2S2) for capturing animal motion in the wild. M2S2 integrates RGB, depth, thermal, event, LiDAR, and acoustic sensors to overcome challenges like synchronization and calibration. We showcase its application with data from cheetahs, offering a new resource for advancing sensor fusion algorithms in wildlife motion capture.
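To give a concrete feel for the synchronization challenge the abstract mentions, the sketch below aligns two independently clocked sensor streams by nearest-timestamp matching. The function name, frame rates, and tolerance are illustrative assumptions, not the actual M2S2 pipeline.

```python
# Hypothetical sketch: pair frames from independently clocked sensor streams
# by nearest-timestamp matching. Rates and tolerance are illustrative only,
# not taken from the M2S2 implementation.
from bisect import bisect_left

def nearest_match(reference_ts, other_ts, tolerance_s=0.05):
    """For each reference timestamp, return the index of the closest timestamp
    in the other (sorted) stream, or None if nothing lies within tolerance_s."""
    matches = []
    for t in reference_ts:
        i = bisect_left(other_ts, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(other_ts)]
        if not candidates:
            matches.append(None)
            continue
        best = min(candidates, key=lambda j: abs(other_ts[j] - t))
        matches.append(best if abs(other_ts[best] - t) <= tolerance_s else None)
    return matches

# Example: pair 30 Hz RGB frames with 10 Hz LiDAR sweeps (synthetic timestamps).
rgb_ts = [k / 30.0 for k in range(90)]      # 3 s of RGB frames
lidar_ts = [k / 10.0 for k in range(30)]    # 3 s of LiDAR sweeps
pairs = nearest_match(rgb_ts, lidar_ts)
print(sum(p is not None for p in pairs), "RGB frames matched to a LiDAR sweep")
```

In practice, hardware triggering or shared clocks would replace such post-hoc matching, but the sketch illustrates why heterogeneous frame rates make cross-sensor association non-trivial.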
Published in: IEEE Sensors Letters (Volume 9, Issue 4, April 2025)