Dexterous manipulation

Our hands are capable of far more than static grasping. Dexterous manipulation, which involves precise regulation of the fingers, is a skilled activity that we use every day. Such manipulation combines predictive control strategies with feedback from multiple sensory modalities, such as vision, touch, and proprioception. How do humans use potentially redundant information from multiple sensors?

Using an experimental task of compressing slender springs prone to buckling, we showed how humans combine feedback from multiple sensors in a context-sensitive manner. Vision, the slowest of the available sensory modalities, was almost unused when tactile information was reliable. But when the quality of tactile sensation was experimentally degraded, vision became the dominant mode of feedback despite having larger time delays than proprioception. We developed a reduced-order model for this dynamic manipulation task and found that the globally optimal strategy for sensor fusion resembled the one humans use. This work led to a clinical tool for quantifying hand function, as well as a functional MRI study to identify the neural correlates of strength and dexterity. Fundamental questions remain about how the nervous system learns to control objects with many internal degrees of freedom, and whether techniques for bifurcation detection employed by engineers (for example, in electrical power grids) have a role to play in neural and robotic control near stability boundaries.
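To make the sensor-fusion idea concrete, here is a minimal sketch, not the authors' actual model: a first-order unstable plant stands in for a spring compressed past its buckling threshold, and is stabilized by feedback from two delayed, noisy sensors ("touch" and "vision") fused by inverse-variance weighting, with a heuristic penalty that inflates the effective variance of slower sensors. All parameter values, delays, and the penalty itself are illustrative assumptions.

```python
import numpy as np

# Toy sketch (not the authors' model): an unstable first-order plant,
# standing in for a spring past its buckling threshold, stabilized by
# feedback from two delayed, noisy sensors fused by inverse-variance
# weighting. All numbers below are illustrative assumptions.

rng = np.random.default_rng(0)

dt = 0.001           # integration step (s)
T = 2.0              # trial duration (s)
a, b = 4.0, 1.0      # x_dot = a*x + b*u; a > 0 makes the plant unstable
K = 10.0             # proportional feedback gain (assumed)

# Assumed sensor delays (s): touch is fast, vision is slow
delays = {"touch": 0.04, "vision": 0.10}

def fusion_weights(noise_sd):
    """Inverse-variance weights with a heuristic delay penalty: a reading
    delayed by d says less about the *current* state of an unstable plant,
    so its effective variance is inflated by exp(2*a*d)."""
    eff_var = {s: noise_sd[s] ** 2 * np.exp(2 * a * delays[s]) for s in delays}
    inv = {s: 1.0 / v for s, v in eff_var.items()}
    z = sum(inv.values())
    return {s: w / z for s, w in inv.items()}

def simulate(noise_sd):
    """One trial: delayed noisy readings, fused estimate, linear feedback."""
    w = fusion_weights(noise_sd)
    lag = {s: int(delays[s] / dt) for s in delays}
    n = int(T / dt)
    x = np.zeros(n)
    x[0] = 0.01                       # small initial deflection
    for t in range(n - 1):
        # Each sensor reports a delayed, noisy copy of the state
        y = {s: x[max(t - lag[s], 0)] + rng.normal(0.0, noise_sd[s])
             for s in delays}
        x_hat = sum(w[s] * y[s] for s in delays)   # fused state estimate
        u = -K * x_hat                             # stabilizing feedback
        x[t + 1] = x[t] + dt * (a * x[t] + b * u)
    return w, np.max(np.abs(x))

for label, sd in [("reliable touch", {"touch": 0.02, "vision": 0.02}),
                  ("degraded touch", {"touch": 0.20, "vision": 0.02})]:
    w, peak = simulate(sd)
    print(f"{label}: weights = " +
          ", ".join(f"{s} {w[s]:.2f}" for s in w) +
          f", peak |x| = {peak:.3f}")
```

Run as-is, the fused weights shift from touch-dominant to vision-dominant when tactile noise is inflated, echoing the qualitative finding above; the real task, with a continuum spring near buckling and human sensorimotor delays, is of course far richer than this scalar caricature.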

Project number: 5