No single, stable 3D representation can explain pointing biases in a spatial updating task

Vuong, J., Fitzgibbon, A. W. and Glennerster, A. (ORCID: https://orcid.org/0000-0002-8674-2763) (2019) No single, stable 3D representation can explain pointing biases in a spatial updating task. Scientific Reports, 9 (1). 12578. ISSN 2045-2322
DOI: 10.1038/s41598-019-48379-8

Abstract

People are able to keep track of objects as they navigate through space, even when objects are out of sight. This requires some kind of representation of the scene and of the observer's location, but the form this might take is debated. We tested the accuracy and reliability of observers' estimates of the visual direction of previously viewed targets. Participants viewed four objects from one location, with binocular vision and small head movements; then, without any further sight of the targets, they walked to another location and pointed towards them. All conditions were tested in an immersive virtual environment, and some were also carried out in a real scene. Participants made large, consistent pointing errors that are poorly explained by any stable 3D representation. Any explanation based on a 3D representation would have to posit a different layout of the remembered scene depending on the orientation of the obscuring wall at the moment the participant points. Our data show that the mechanisms for updating the visual direction of unseen targets are not based on a stable 3D model of the scene, even a distorted one.