Vicarious body maps bridge vision and touch in the human brain

Text (Open Access), Published Version
· Available under License Creative Commons Attribution.

Text (NATURE_MS_SUBMISSION_NH_TK.docx), Accepted Version
· Restricted to Repository staff only.
· The copyright of this document has not yet been checked; this may affect its availability.

It is advisable to refer to the publisher's version if you intend to cite from this work.


Hedger, N. ORCID: https://orcid.org/0000-0002-2733-1913, Naselaris, T., Kay, K. and Knapen, T. (2025) Vicarious body maps bridge vision and touch in the human brain. Nature. ISSN 0028-0836 doi: 10.1038/s41586-025-09796-0

Abstract/Summary

Our sensory systems work together to generate a cohesive experience of the world around us. Watching others being touched activates brain areas representing our own sense of touch: the visual system recruits touch-related computations to simulate the bodily consequences of visual inputs1. A long-standing question is how the brain implements this interface between visual and somatosensory representations2. To address this question, we developed a model that simultaneously maps somatosensory body-part tuning and visual field tuning throughout the brain. Applying our model to ongoing co-activations during rest yielded detailed maps of body-part tuning in the brain's endogenous somatotopic network. During movie-watching, somatotopic tuning explained responses throughout the entire dorsolateral visual system, revealing an array of somatotopic body maps that tile the cortical surface. The body-position tuning of these maps aligns with visual tuning, predicting both preferences for visual field locations and visual-category preferences for body parts. These results reveal a mode of brain organization in which aligned visual-somatosensory topographic maps connect visual and bodily reference frames. This cross-modal interface is ideally situated to translate raw sensory impressions into more abstract formats useful for action, social cognition and semantic processing.
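The record itself contains no code, and the abstract does not specify the model's form. Purely as an illustrative sketch of the kind of tuning-mapping approach the abstract describes, the toy Python example below fits a pRF-style one-dimensional Gaussian "body part" tuning curve to a single voxel's response profile by grid search. All names, data shapes and modelling choices here are assumptions for illustration, not the authors' method.

import numpy as np

# Illustrative sketch only: the publication's actual model is not described
# in this record. This fits a one-dimensional Gaussian "body part" tuning
# curve to one voxel's responses by grid search. All choices are assumptions.

def gaussian_tuning(positions, mu, sigma):
    """Predicted response to stimulation at each body-part position."""
    return np.exp(-0.5 * ((positions - mu) / sigma) ** 2)

def fit_voxel(responses, positions, mu_grid, sigma_grid):
    """Grid search for the tuning centre and width that best fit one voxel."""
    best_mu, best_sigma, best_r = np.nan, np.nan, -np.inf
    for mu in mu_grid:
        for sigma in sigma_grid:
            pred = gaussian_tuning(positions, mu, sigma)
            r = np.corrcoef(pred, responses)[0, 1]  # goodness of fit
            if r > best_r:
                best_mu, best_sigma, best_r = mu, sigma, r
    return best_mu, best_sigma, best_r

# Toy data: 20 positions along the body (0 = toes, 1 = head), one noisy voxel.
rng = np.random.default_rng(0)
positions = np.linspace(0.0, 1.0, 20)
voxel = gaussian_tuning(positions, mu=0.3, sigma=0.1)
voxel += 0.1 * rng.standard_normal(positions.size)

mu, sigma, r = fit_voxel(voxel, positions,
                         mu_grid=np.linspace(0.0, 1.0, 41),
                         sigma_grid=np.linspace(0.05, 0.5, 10))
print(f"estimated tuning centre = {mu:.2f}, width = {sigma:.2f}, r = {r:.2f}")

Repeating such a per-voxel fit across the cortical surface, with position grids for both the body and the visual field, is one plausible way to produce the paired somatotopic and retinotopic maps the abstract refers to.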


Item Type: Article
URI: https://centaur.reading.ac.uk/id/eprint/127242
Identification Number/DOI: 10.1038/s41586-025-09796-0
Refereed: Yes
Divisions: Interdisciplinary Research Centres (IDRCs) > Centre for Integrative Neuroscience and Neurodynamics (CINN)
           Life Sciences > School of Psychology and Clinical Language Sciences > Department of Psychology
           Life Sciences > School of Psychology and Clinical Language Sciences > Neuroscience
Publisher: Nature Publishing Group