Judging size, distance and depth with an active telepresence system

Plooy, A. M., Brooker, J. P., Wann, J. P. and Sharkey, P. M. (2000) Judging size, distance and depth with an active telepresence system. In: IS&T/SPIE Electronic Imaging: Stereoscopic Displays & Virtual Reality Systems VIII, 22 Jan 2001, San Jose, California, USA, pp. 244-252.

Full text not archived in this repository.

Official URL: http://dx.doi.org/10.1117/12.384449

Abstract/Summary

A visual telepresence system has been developed at the University of Reading which utilizes eye tracking to adjust the horizontal orientation of the cameras and display system according to the convergence state of the operator's eyes. Slaving the cameras to the operator's direction of gaze enables the object of interest to be centred on the displays. The advantage of this is that the camera field of view may be decreased to maximize the achievable depth resolution. An active camera system requires an active display system if appropriate binocular cues are to be preserved. For applications that critically depend upon veridical perception of an object's location and dimensions, it is imperative that the contribution of binocular cues to these judgements be ascertained, because they are directly influenced by camera and display geometry. Using the active telepresence system, we investigated the contribution of ocular convergence information to judgements of size, distance and shape. Participants performed an open-loop reach and grasp of the virtual object under reduced-cue conditions in which the orientations of the cameras and the displays were either matched or unmatched. Inappropriate convergence information produced weak perceptual distortions and caused problems in fusing the images.
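The abstract turns on the standard geometric link between ocular (or camera) convergence and fixation distance; the record itself gives no formulas, so the following is a minimal sketch of that relationship, assuming a symmetric camera/eye pair and a nominal 0.065 m interocular separation (both are illustrative assumptions, not parameters from the paper).

```python
import math

def vergence_angle(fixation_distance_m, interocular_m=0.065):
    """Convergence angle (radians) of a symmetric eye/camera pair
    fixating a point at fixation_distance_m straight ahead.
    interocular_m (0.065 m) is an assumed typical separation."""
    return 2.0 * math.atan(interocular_m / (2.0 * fixation_distance_m))

def fixation_distance(vergence_rad, interocular_m=0.065):
    """Inverse: the distance signalled by a given convergence angle.
    If display geometry imposes a different vergence than the cameras
    used, this signalled distance no longer matches the true one."""
    return interocular_m / (2.0 * math.tan(vergence_rad / 2.0))

# Cameras converged for an object at 0.5 m:
theta = vergence_angle(0.5)
print(round(fixation_distance(theta), 3))  # round-trips to 0.5
```

This is why matched camera and display orientations matter in the study: when the two are unmatched, the convergence state of the operator's eyes signals a distance other than the object's true distance, which is the predicted source of the perceptual distortions and fusion difficulties reported.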

Item Type:Conference or Workshop Item (Paper)
Refereed:Yes
Divisions:Faculty of Science > School of Systems Engineering
ID Code:19119