Integration of motion and form cues for the perception of self-motion in the human brain

Kuai, S.-G., Shan, Z.-K.-D., Chen, J., Xu, Z.-X., Li, J.-M., Field, D. T. ORCID: https://orcid.org/0000-0003-4041-8404 and Li, L. (2020) Integration of motion and form cues for the perception of self-motion in the human brain. The Journal of Neuroscience, 40 (5). pp. 1120-1132. ISSN 1529-2401

Available files (please see our End User Agreement before downloading):

Text (Accepted Version) - 1MB
Image (Figure 1) - 293kB
Image (Figure 2) - 1MB

It is advisable to refer to the publisher's version if you intend to cite from this work. See Guidance on citing.

To link to this item, use the DOI: 10.1523/JNEUROSCI.3225-18.2019

Abstract/Summary

When moving around in the world, the human visual system uses both motion and form information to estimate the direction of self-motion (i.e., heading). However, little is known about the cortical areas responsible for this task. This brain-imaging study addressed this question using visual stimuli consisting of randomly distributed dot pairs oriented toward one locus on the screen (the form-defined focus of expansion, FoE) but moving away from a different locus (the motion-defined FoE) to simulate observer translation. We first fixed the motion-defined FoE location and shifted the form-defined FoE location. We then made the locations of the motion- and the form-defined FoEs either congruent (at the same location in the display) or incongruent (on opposite sides of the display). The motion- or form-defined FoE shift was the same in the two types of stimuli, but the perceived heading direction shifted for the congruent but not the incongruent stimuli. Participants (both sexes) made a task-irrelevant (contrast discrimination) judgment during scanning. Searchlight and region-of-interest-based multi-voxel pattern analysis (MVPA) revealed that early visual areas V1, V2, and V3 responded to either the motion- or the form-defined FoE shift. After V3, only the dorsal areas V3a and V3B/KO responded to such shifts. Furthermore, area V3B/KO showed significantly higher decoding accuracy for the congruent than for the incongruent stimuli. Our results provide direct evidence that area V3B/KO does not simply respond to motion and form cues but integrates these two cues for the perception of heading.

Human survival relies on the accurate perception of self-motion. The visual system uses both motion (optic flow) and form cues to perceive the direction of self-motion (heading). Although the human brain areas that process optic flow and form structure are well identified, the areas responsible for integrating these two cues for the perception of self-motion remain unknown. We conducted fMRI experiments and used MVPA to find human brain areas that can decode the shift in heading specified by each cue alone and by the two cues combined. We found that motion and form information are first processed in the early visual areas and are then likely integrated in the higher dorsal area V3B/KO for the final estimation of heading.
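The two sketches below are illustrative only and are not taken from the article. The first shows, under assumed geometry and parameter values, how a dot-pair stimulus of the kind described in the abstract could be generated: each pair is oriented toward a form-defined FoE while the pairs drift radially away from a separate motion-defined FoE, so the two cues can be made congruent or incongruent. All function names, coordinates, dot counts, and speeds are assumptions for illustration.

```python
# Minimal sketch (not the authors' code) of the dot-pair stimulus described above:
# each pair is ORIENTED toward a form-defined focus of expansion (FoE), while the
# pairs MOVE radially away from a separate motion-defined FoE.
import numpy as np

def make_dot_pairs(n_pairs=200, form_foe=(-5.0, 0.0), motion_foe=(5.0, 0.0),
                   pair_sep=0.5, field_size=20.0, rng=None):
    """Return pair centres, the two dots of each pair, and per-pair motion vectors."""
    rng = np.random.default_rng(rng)
    # Random pair centres within a square display (units assumed: deg of visual angle)
    centres = (rng.random((n_pairs, 2)) - 0.5) * field_size

    # Form cue: orient each dot pair along the line joining its centre to the form FoE
    to_form = np.asarray(form_foe) - centres
    unit_form = to_form / np.linalg.norm(to_form, axis=1, keepdims=True)
    dot_a = centres + 0.5 * pair_sep * unit_form
    dot_b = centres - 0.5 * pair_sep * unit_form

    # Motion cue: pairs drift radially away from the motion-defined FoE,
    # with speed scaled by eccentricity as in simulated forward translation
    from_motion = centres - np.asarray(motion_foe)
    dist = np.linalg.norm(from_motion, axis=1, keepdims=True)
    velocity = 0.02 * from_motion  # assumed gain; speed grows with eccentricity

    return centres, dot_a, dot_b, velocity

# Congruent stimulus: both cues specify the same FoE; incongruent: opposite sides.
congruent = make_dot_pairs(form_foe=(5.0, 0.0), motion_foe=(5.0, 0.0))
incongruent = make_dot_pairs(form_foe=(-5.0, 0.0), motion_foe=(5.0, 0.0))
```

The second sketch illustrates the general logic of ROI-based MVPA decoding with a linear classifier and cross-validation. It uses simulated placeholder data in place of real voxel patterns and is not the authors' pipeline; the trial count, voxel count, and cross-validation scheme are assumptions.

```python
# Minimal sketch (assumed, not the authors' pipeline) of ROI-based MVPA:
# a linear SVM decodes which FoE shift was shown from the voxel pattern on each trial.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.standard_normal((80, 300))      # 80 trials x 300 voxels in one ROI (simulated)
y = np.repeat([0, 1], 40)               # e.g., leftward vs. rightward FoE shift labels

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(clf, X, y, cv=8)  # cross-validated decoding accuracy
print(f"mean decoding accuracy: {acc.mean():.2f}")
```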

Item Type: Article
Refereed: Yes
Divisions: Life Sciences > School of Psychology and Clinical Language Sciences > Department of Psychology
ID Code: 88279
Publisher: The Society for Neuroscience


