
Judging size, distance and depth with an active telepresence system

Plooy, A. M., Brooker, J. P., Wann, J. P. and Sharkey, P. M. (2000) Judging size, distance and depth with an active telepresence system. In: IS&T/SPIE Electronic Imaging: Stereoscopic displays & virtual reality systems VIII, 22 Jan 2001, San Jose, California, USA, pp. 244-252.

Full text not archived in this repository.

It is advisable to refer to the publisher's version if you intend to cite from this work. See Guidance on citing.

A visual telepresence system has been developed at the University of Reading which utilizes eye tracking to adjust the horizontal orientation of the cameras and display system according to the convergence state of the operator's eyes. Slaving the cameras to the operator's direction of gaze enables the object of interest to be centered on the displays. The advantage of this is that the camera field of view may be decreased to maximize the achievable depth resolution. An active camera system requires an active display system if appropriate binocular cues are to be preserved. For applications that depend critically on veridical perception of an object's location and dimensions, it is imperative to ascertain the contribution of binocular cues to these judgements, because they are directly influenced by camera and display geometry. Using the active telepresence system, we investigated the contribution of ocular convergence information to judgements of size, distance and shape. Participants performed an open-loop reach and grasp of a virtual object under reduced-cue conditions in which the orientations of the cameras and the displays were either matched or unmatched. Inappropriate convergence information produced weak perceptual distortions and caused problems in fusing the images.
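The abstract's central geometric point is that a symmetric convergence angle maps one-to-one onto a fixation distance, so mismatched camera and display convergence signals a different distance than the true one. The following is a minimal illustrative sketch of that geometry only; the baseline and distances are hypothetical numbers, not values from the paper, and this is not the authors' apparatus or analysis code.

```python
import math

def convergence_angle(baseline_m: float, distance_m: float) -> float:
    """Symmetric convergence angle (radians) for two cameras (or eyes)
    separated by baseline_m, both fixating a point distance_m ahead."""
    return 2.0 * math.atan((baseline_m / 2.0) / distance_m)

def distance_from_convergence(baseline_m: float, angle_rad: float) -> float:
    """Invert the geometry: the fixation distance implied by a given
    convergence angle -- the distance the convergence cue signals."""
    return (baseline_m / 2.0) / math.tan(angle_rad / 2.0)

# Hypothetical numbers: a 65 mm baseline (a typical interocular
# separation) and a target 0.5 m away.
b = 0.065
true_d = 0.5
theta = convergence_angle(b, true_d)

# If the displays present a convergence state geared to a 0.7 m target
# while the cameras verge on the 0.5 m target (an unmatched condition),
# the distance signalled by convergence differs from the true distance:
mismatched_theta = convergence_angle(b, 0.7)
implied_d = distance_from_convergence(b, mismatched_theta)
print(f"true distance: {true_d} m, "
      f"implied by display convergence: {implied_d:.3f} m")
```

Nearer targets require larger convergence angles, so an unmatched display geometry that presents too small an angle should, on this simple account, bias distance judgements outward.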

Item Type: Conference or Workshop Item (Paper)
ID Code: 19119

