Speaker
Description
A growing body of evidence suggests that when we interact physically with our environments, our brains form models of the deterministic connection between our actions and the ensuing sensory information. Theories of motor learning posit that the formation of internal models is a key mechanism through which the brain forms predictions about the outcomes of actions, overcoming certain limitations of the biological feedback system. Consistent with these theories, experiments on human-robot interaction have demonstrated the ability of the brain to capture the difference between random and deterministic forces. I will review some of these earlier studies and will then focus on a family of human-machine interfaces that create a many-to-one mapping between body motions and the movements of an external controlled object. In this context, the user learns to control the external object by forming an inverse model of the interface mapping. I will describe this learning process as a state-based dynamical system and will discuss how machine learning may cooperate with human learning to facilitate the acquisition of motor skills and their recovery after injury to the nervous system.
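As a rough illustration of the setting described above, the sketch below simulates a many-to-one interface and an inverse model fit from practice data. The dimensions, the linear map, and the least-squares learner are assumptions for illustration only, not the speaker's actual interface or learning model.

```python
import numpy as np

# Hypothetical body-machine interface: a fixed many-to-one linear map A
# projects high-dimensional body signals (8 motion sensors, assumed) onto
# a low-dimensional controlled object (a 2-D cursor, assumed).
rng = np.random.default_rng(0)
n_body, n_cursor = 8, 2
A = rng.standard_normal((n_cursor, n_body))   # interface map: body -> cursor

# Practice data: random body motions and the cursor motions they produce.
body = rng.standard_normal((500, n_body))
cursor = body @ A.T

# Inverse model fit by least squares (body ~= cursor @ B): which body motion
# should be produced to obtain a desired cursor motion?
B, *_ = np.linalg.lstsq(cursor, body, rcond=None)

# Using the inverse model to reach a cursor target; the forward map confirms
# that the selected body motion reproduces the target.
target = np.array([1.0, -0.5])
body_command = target @ B
print(np.allclose(A @ body_command, target))  # True
```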