The Role of Somatosensory Models in Vocal Autonomous Exploration (Work presented at Innovation Match Mx 2016).
The present work has two main objectives. First, it highlights the relevance of studying the early stages of language development using machines, as an approach that can contribute to future speech recognizers and synthesizers, user interfaces, active learning techniques, and the fields of robotics and artificial intelligence in general. Second, it introduces results on the role of somatosensory models in vocal autonomous exploration. Previous works studied the roles of intrinsic motivations and motor constraints in early vocal development, showing that artificial agents endowed with a simulated vocal tract can use active learning techniques to autonomously learn to produce intended sounds through probabilistic models. This work studies how modifying the somatosensory model, which maps motor commands to undesired articulatory configurations, affects the intrinsically motivated active learning process. The somatosensory system is modeled as a Gaussian Mixture Model, and simulations were run varying the structure of the model in order to analyze differences in the results. The effects on the explored sensorimotor regions and on the number of undesired vocal configurations are studied. The simulations presented in this work show that the structure of the current somatosensory model is relevant to the learning process. However, it can also be concluded that reliably characterizing the effects of modifying the somatosensory model will require further simulations and clearly defined performance measures.
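To illustrate the idea of a somatosensory model as a Gaussian Mixture Model that flags motor commands leading to undesired articulatory configurations, the following is a minimal, hypothetical sketch. All parameters (two mixture components over a 2-D motor space, the likelihood threshold, the function names `gmm_likelihood` and `is_undesired`) are illustrative assumptions, not taken from the paper; varying the number of components is one simple analogue of varying the model "structure" studied here.

```python
import numpy as np

def gaussian_pdf(x, mean, cov):
    """Density of a multivariate Gaussian evaluated at point x."""
    d = len(mean)
    diff = x - mean
    inv = np.linalg.inv(cov)
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    return np.exp(-0.5 * diff @ inv @ diff) / norm

def gmm_likelihood(x, weights, means, covs):
    """Mixture likelihood: weighted sum of the component densities."""
    return sum(w * gaussian_pdf(x, m, c)
               for w, m, c in zip(weights, means, covs))

# Illustrative two-component GMM over a 2-D motor space, standing in
# for a model fitted to previously explored, valid configurations.
weights = [0.5, 0.5]
means = [np.array([0.0, 0.0]), np.array([2.0, 2.0])]
covs = [np.eye(2) * 0.25, np.eye(2) * 0.25]

def is_undesired(motor_cmd, threshold=1e-3):
    """Flag a motor command whose likelihood under the model of
    valid configurations falls below an (assumed) threshold."""
    return gmm_likelihood(motor_cmd, weights, means, covs) < threshold

print(is_undesired(np.array([0.1, 0.0])))    # near a component mean -> False
print(is_undesired(np.array([10.0, -5.0])))  # far from both components -> True
```

In a full setup, the mixture parameters would be re-estimated online from the agent's sensorimotor experience, and the flag would steer the intrinsically motivated exploration away from regions predicted to yield undesired vocal configurations.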