View-based approaches to spatial representation in human vision

Glennerster, A. (ORCID: https://orcid.org/0000-0002-8674-2763), Hansard, M. E. and Fitzgibbon, A. W. (2009) View-based approaches to spatial representation in human vision. In: Statistical and Geometrical Approaches to Visual Motion Analysis. Lecture Notes in Computer Science, 5604. Springer, Berlin, pp. 193-208. ISBN 9783642030604
DOI: 10.1007/978-3-642-03061-1_10

Abstract/Summary

In an immersive virtual environment, observers fail to notice the expansion of the room around them and consequently make gross errors when comparing the sizes of objects. This result is difficult to explain if the visual system continuously generates a 3-D model of the scene based on known baseline information from interocular separation or proprioception as the observer walks. An alternative is that observers use view-based methods to guide their actions and to represent the spatial layout of the scene. In this case, they may have an expectation of the images they will receive but be insensitive to the rate at which those images arrive as they walk. We describe how the eye-movement strategy of animals simplifies motion processing when their goal is to move towards a desired image, and we discuss dorsal- and ventral-stream processing of moving images in that context. Although many questions about view-based approaches to scene representation remain unanswered, the solutions are likely to be highly relevant to understanding biological 3-D vision.
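The metric reconstruction that the abstract argues against is, in its simplest geometric form, triangulation from a known baseline. A minimal sketch of that computation (not taken from the chapter; the function name, focal length, and disparity values are illustrative assumptions) shows how depth would follow from interocular separation if the visual system did compute it this way:

```python
# Illustrative sketch: classic depth-from-disparity triangulation for a
# rectified stereo pair with a known baseline. This is the kind of
# continuous metric 3-D reconstruction the abstract questions, not an
# algorithm proposed in the chapter itself.

def depth_from_disparity(focal_length_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Return depth Z = f * B / d (metres) from image disparity d (pixels)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Hypothetical numbers: f = 1000 px, human-like interocular baseline
# of 0.065 m, measured disparity 13 px.
print(depth_from_disparity(1000.0, 0.065, 13.0))  # 5.0 m
```

Note that the formula depends on the baseline B being known: if the room expands without the observer noticing, depths computed this way would be wrong in exactly the way the size-comparison errors suggest observers do not detect.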