Scene understanding for auto-calibration of surveillance cameras

Teixeira, L., Maffra, F. and Badii, A. (2014) Scene understanding for auto-calibration of surveillance cameras. In: Advances in Visual Computing. Springer International Publishing, pp. 671-682.

Full text not archived in this repository.

It is advisable to refer to the publisher's version if you intend to cite from this work.

To link to this item, use DOI: 10.1007/978-3-319-14364-4_65

Abstract/Summary

In the last decade, several research results have presented formulations for the auto-calibration problem. Most of these have relied on the evaluation of vanishing points to extract the camera parameters. Normally, vanishing points are evaluated using pedestrians or the Manhattan World assumption, i.e. the scene is assumed to be composed of orthogonal planar surfaces. In this work, we present a robust framework for auto-calibration, with improved results and generalisability for real-life situations. This framework is capable of handling problems such as occlusions and the presence of unexpected objects in the scene. In our tests, we compare our formulation with the state of the art in auto-calibration using pedestrian-based and Manhattan World-based assumptions. This paper reports on experiments conducted using publicly available datasets; the results show that our formulation represents an improvement over the state of the art.
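The abstract refers to extracting camera parameters from vanishing points. As background, a standard textbook relation (not the authors' specific method) lets one recover the focal length from two vanishing points of orthogonal scene directions, assuming zero skew, square pixels, and a known principal point: for a pinhole camera, orthogonality of the directions implies (v1 - c) . (v2 - c) = -f^2. A minimal sketch under those assumptions:

```python
import math


def focal_from_vanishing_points(v1, v2, c):
    """Estimate focal length f from two vanishing points v1, v2 of
    orthogonal scene directions, given the principal point c.

    Assumes zero skew, square pixels, and a known principal point.
    From the orthogonality constraint d1 . d2 = 0 for the back-projected
    directions, one obtains (v1 - c) . (v2 - c) + f^2 = 0.
    """
    dot = (v1[0] - c[0]) * (v2[0] - c[0]) + (v1[1] - c[1]) * (v2[1] - c[1])
    if dot >= 0:
        # The two points cannot correspond to orthogonal directions
        # under the assumed intrinsics.
        raise ValueError("vanishing points inconsistent with orthogonal directions")
    return math.sqrt(-dot)


# Example: for f = 500 and principal point (320, 240), the directions
# (1, 0, 1) and (-1, 0, 1) are orthogonal and project to the vanishing
# points (820, 240) and (-180, 240).
f = focal_from_vanishing_points((820, 240), (-180, 240), (320, 240))
print(f)  # 500.0
```

Auto-calibration pipelines such as the one summarised above typically obtain these vanishing points automatically, e.g. from pedestrian trajectories or from line segments on orthogonal planar surfaces, rather than from manual annotation.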

Item Type: Book or Report Section
Refereed: Yes
Divisions: Faculty of Science > School of Mathematical, Physical and Computational Sciences > Department of Computer Science
ID Code: 39835
Publisher: Springer International Publishing
