MIMo: A Multi-Modal Infant Model for Studying Cognitive Development in Humans and AIs | IEEE Conference Publication | IEEE Xplore


Abstract:

A central challenge in the early cognitive development of humans is making sense of the rich multimodal experiences originating from interactions with the physical world. AIs that learn in an autonomous and open-ended fashion based on multimodal sensory input face a similar challenge. To study such development and learning in silico, we have created MIMo, a multimodal infant model. MIMo’s body is modeled after an 18-month-old child and features binocular vision, a vestibular system, proprioception, and touch perception through a full-body virtual skin. MIMo is an open-source research platform based on the MuJoCo physics engine for constructing computational models of human cognitive development as well as studying open-ended autonomous learning in AI. We describe the design and interfaces of MIMo and provide examples illustrating its use.
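The abstract lists MIMo's sensory modalities: binocular vision, a vestibular system, proprioception, and full-body touch. As a minimal illustrative sketch only (the key names, array shapes, and sensor counts below are hypothetical assumptions, not MIMo's actual interface), a Gym-style environment for such a model might bundle one time step of multimodal sensor data into a single observation dictionary:

```python
# Hypothetical sketch of a multimodal observation, loosely in the style of
# a Gym-like environment. All key names, image sizes, and sensor counts
# are illustrative assumptions, not MIMo's documented interface.

def make_observation():
    """Return one time step of multimodal sensor data as a dict of arrays."""
    return {
        # Binocular vision: one RGB image per eye (height x width x channels).
        "eye_left": [[[0.0] * 3 for _ in range(64)] for _ in range(64)],
        "eye_right": [[[0.0] * 3 for _ in range(64)] for _ in range(64)],
        # Proprioception: joint positions and velocities as a flat vector.
        "proprioception": [0.0] * 44,
        # Vestibular: head linear acceleration plus angular velocity.
        "vestibular": [0.0] * 6,
        # Touch: one pressure reading per sensor point on the virtual skin.
        "touch": [0.0] * 1000,
    }

obs = make_observation()
print(sorted(obs.keys()))
```

A learning agent would consume such a dictionary at every simulation step, which is what makes cross-modal learning experiments (e.g., relating touch events to visual input) straightforward to set up.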
Date of Conference: 12-15 September 2022
Date Added to IEEE Xplore: 30 November 2022
Conference Location: London, United Kingdom

