Limanowski, J. and Friston, K. (2020) Active inference under visuo-proprioceptive conflict: Simulation and empirical results. Scientific Reports, 10(1), Article 4010. https://doi.org/10.1038/s41598-020-61097-w
Abstract
It has been suggested that the brain controls hand movements via internal models that rely on visual and proprioceptive cues about the state of the hand. In active inference formulations of such models, the relative influence of each modality on action and perception is determined by how precise (reliable) it is expected to be. The 'top-down' affordance of expected precision to a particular sensory modality is associated with attention. Here, we asked whether increasing attention to (i.e., the precision of) vision or proprioception would enhance performance in a hand-target phase matching task, in which visual and proprioceptive cues about hand posture were incongruent. We show that, in a simple simulated agent based on predictive coding formulations of active inference, increasing the expected precision of vision or proprioception improved task performance (target matching with the seen or felt hand, respectively) under visuo-proprioceptive conflict. Moreover, we show that this formulation captured the behaviour and self-reported attentional allocation of human participants performing the same task in a virtual reality environment. Together, our results show that selective attention can balance the impact of (conflicting) visual and proprioceptive cues on action, rendering attention a key mechanism for a flexible body representation for action.
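To make the precision-weighting idea concrete, the following minimal Python sketch (an illustrative assumption, not the authors' simulation, which uses a full predictive-coding/active-inference scheme) treats the estimated hand posture as a precision-weighted average of a visual cue and a proprioceptive cue that are offset by a fixed conflict. When the agent places its estimate on the target, raising the expected precision of vision shrinks the seen-hand matching error, while raising the expected precision of proprioception shrinks the felt-hand matching error.

```python
# Illustrative sketch only: precision-weighted cue integration under a fixed
# visuo-proprioceptive offset. All names (pi_vis, pi_prop, conflict) are
# hypothetical and chosen for this example.

def matching_errors(pi_vis, pi_prop, conflict=0.3, target=1.0):
    """Return (seen-hand error, felt-hand error) once the agent has moved so
    that its precision-weighted posture estimate coincides with the target.

    The seen hand is displaced from the felt hand by `conflict`; the estimate
    weights each cue by its expected precision (inverse variance).
    """
    w_vis = pi_vis / (pi_vis + pi_prop)      # relative weight afforded to vision
    felt = target - conflict * w_vis         # felt (true) posture when estimate == target
    seen = felt + conflict                   # seen posture, offset by the conflict
    return abs(seen - target), abs(felt - target)

# Higher visual precision favours matching with the seen hand, and vice versa.
for pi_vis, pi_prop in [(4.0, 1.0), (1.0, 4.0)]:
    err_seen, err_felt = matching_errors(pi_vis, pi_prop)
    print(f"pi_vis={pi_vis}, pi_prop={pi_prop}: "
          f"seen-hand error={err_seen:.2f}, felt-hand error={err_felt:.2f}")
```

With these assumed values, the first setting (high visual precision) yields a small seen-hand error and a larger felt-hand error, and the second setting reverses the pattern, mirroring the qualitative effect of attentional precision weighting described in the abstract.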