Abstract:
A central challenge in the early cognitive development of humans is making sense of the rich multimodal experiences originating from interactions with the physical world. AIs that learn in an autonomous and open-ended fashion based on multimodal sensory input face a similar challenge. To study such development and learning in silico, we have created MIMo, a multimodal infant model. MIMo’s body is modeled after an 18-month-old child and features binocular vision, a vestibular system, proprioception, and touch perception through a full-body virtual skin. MIMo is an open-source research platform based on the MuJoCo physics engine for constructing computational models of human cognitive development as well as studying open-ended autonomous learning in AI. We describe the design and interfaces of MIMo and provide examples illustrating its use.
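As the abstract notes, MIMo is built on the MuJoCo physics engine. The following is a minimal sketch of how such a model could be loaded and stepped using the official `mujoco` Python bindings; the model path is a placeholder and does not reflect MIMo's actual file layout or interfaces, which are described in the paper and the open-source repository.

```python
import mujoco

# Placeholder path to a MIMo scene description (hypothetical; the actual
# file name and layout in the MIMo repository may differ).
MODEL_PATH = "mimo/MIMo.xml"

# Load the MuJoCo model and allocate the simulation state.
model = mujoco.MjModel.from_xml_path(MODEL_PATH)
data = mujoco.MjData(model)

# Step the physics forward with zero actuation and read back joint
# positions as a simple stand-in for proprioceptive readout.
for _ in range(100):
    mujoco.mj_step(model, data)

print("Number of joints:", model.njnt)
print("First joint positions:", data.qpos[:5])
```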
Date of Conference: 12-15 September 2022
Date Added to IEEE Xplore: 30 November 2022