View-based modelling of human visual navigation errors

Pickup, L., Fitzgibbon, A., Gilson, S. and Glennerster, A. (ORCID: https://orcid.org/0000-0002-8674-2763) (2011) View-based modelling of human visual navigation errors. In: IEEE 10th IVMSP Workshop, 16-17 June 2011, Ithaca, NY, pp. 135-140. https://doi.org/10.1109/IVMSPW.2011.5970368
Abstract/Summary

View-based and Cartesian representations provide rival accounts of visual navigation in humans, and here we explore possible models for the view-based case. A visual "homing" experiment was undertaken by human participants in immersive virtual reality. The distributions of end-point errors on the ground plane differed significantly in shape and extent depending on the visual landmark configuration and the relative goal location. A model based on simple visual cues captures important characteristics of these distributions. Augmenting the visual features to include 3D elements such as stereo and motion parallax results in a set of models that describe the data accurately, demonstrating the effectiveness of a view-based approach.
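The sketch below illustrates the general idea of a view-based ("snapshot") homing model of the kind the abstract describes: the agent stores the visual features at the goal, then ends up at whichever ground-plane position best matches that stored view under measurement noise, so the end-point error distribution depends on landmark geometry and goal location. The landmark layout, bearing-only feature set, noise model, and grid search are all assumptions made for illustration; they are not the authors' actual model or the experimental configuration.

```python
import numpy as np

# Illustrative landmark layout and goal position on the ground plane (metres).
# These values are assumptions for the sketch, not the configurations used in the paper.
LANDMARKS = np.array([[-1.5, 3.0], [0.0, 4.0], [1.5, 3.0]])
GOAL = np.array([0.0, 1.0])

def view_features(positions, landmarks=LANDMARKS):
    """'View' at each position: bearing to every landmark (a purely monocular cue set).

    positions: (N, 2) array; returns an (N, n_landmarks) array of bearings.
    Stereo or motion-parallax cues (e.g. landmark distances) could be appended
    as extra columns to build richer 3D variants of the model.
    """
    d = landmarks[None, :, :] - positions[:, None, :]
    return np.arctan2(d[..., 1], d[..., 0])

def homing_endpoints(goal_view, n_trials=200, noise_sd=0.02, rng=None):
    """Simulate homing end-points.

    On each trial, pick the grid position whose noise-corrupted view best
    matches the stored goal view; the scatter of these end-points forms the
    error distribution.
    """
    rng = np.random.default_rng(rng)
    xs, ys = np.meshgrid(np.arange(-2, 2, 0.02), np.arange(0, 4, 0.02))
    grid = np.column_stack([xs.ravel(), ys.ravel()])
    views = view_features(grid)  # noise-free views over the candidate grid
    endpoints = []
    for _ in range(n_trials):
        noisy = views + rng.normal(0.0, noise_sd, size=views.shape)
        # Wrap angular differences into [-pi, pi) before scoring.
        diff = (noisy - goal_view + np.pi) % (2 * np.pi) - np.pi
        endpoints.append(grid[np.argmin(np.sum(diff ** 2, axis=1))])
    return np.array(endpoints)

if __name__ == "__main__":
    goal_view = view_features(GOAL[None, :])[0]
    errors = homing_endpoints(goal_view) - GOAL
    print("mean end-point error:", errors.mean(axis=0))
    print("end-point error covariance:\n", np.cov(errors.T))
```

Changing the landmark positions or the goal location in this sketch changes the shape and extent of the simulated error cloud, which is the qualitative behaviour the abstract reports for human participants.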