Combining 3D and 2D for less constrained periocular recognition

Chen, L. and Ferryman, J. (2015) Combining 3D and 2D for less constrained periocular recognition. In: IEEE Seventh International Conference on Biometrics: Theory, Applications and Systems (BTAS2015), September 8-11, 2015, Arlington, US, pp. 1-6.

Full text: Accepted Version, 3MB. Please see our End User Agreement before downloading.
It is advisable to refer to the publisher's version if you intend to cite from this work. See Guidance on citing.

Official URL: https://ieeexplore.ieee.org/document/7358753

Abstract/Summary

Periocular recognition has recently become an active topic in biometrics. Typically it uses 2D image data of the periocular region. This paper is the first description of combining 3D shape structure with 2D texture. A simple and effective technique using the iterative closest point (ICP) algorithm was applied for 3D periocular region matching. It proved robust for relatively unconstrained eye region capture and requires no training. Local binary patterns (LBP) were applied for 2D image-based periocular matching. The two modalities were combined at the score level. The approach was evaluated on the Bosphorus 3D face database, which contains large variations in facial expression, head pose and occlusion. The rank-1 accuracy achieved from the 3D data (80%) was better than that from the 2D data (58%), and the best accuracy (83%) was achieved by fusing the two types of data. This suggests that significant improvements to periocular recognition systems could be achieved using the 3D structure information now available from small and inexpensive sensors.
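
As a rough illustration of the pipeline the abstract describes, the sketch below matches 3D periocular point clouds with ICP, compares 2D texture with uniform LBP histograms, and fuses the two dissimilarity scores at the score level. This is not the authors' implementation: the library choices (Open3D for ICP, scikit-image for LBP), the chi-squared histogram distance, the min-max normalisation and the fusion weight are all illustrative assumptions.

import numpy as np
import open3d as o3d
from skimage.feature import local_binary_pattern


def icp_score(probe_pts, gallery_pts, max_corr_dist=2.0):
    # ICP-based dissimilarity between two Nx3 periocular point clouds;
    # a lower inlier RMSE means a closer 3D shape match.
    src = o3d.geometry.PointCloud()
    src.points = o3d.utility.Vector3dVector(probe_pts)
    tgt = o3d.geometry.PointCloud()
    tgt.points = o3d.utility.Vector3dVector(gallery_pts)
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_corr_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.inlier_rmse


def lbp_score(probe_img, gallery_img, P=8, R=1):
    # Chi-squared distance between uniform LBP histograms of two grey-level
    # periocular images; lower means a closer 2D texture match.
    def hist(img):
        codes = local_binary_pattern(img, P, R, method="uniform")
        h, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
        return h
    hp, hg = hist(probe_img), hist(gallery_img)
    return 0.5 * np.sum((hp - hg) ** 2 / (hp + hg + 1e-10))


def fuse(scores_3d, scores_2d, w=0.5):
    # Score-level fusion: min-max normalise each modality's scores over the
    # gallery, then take a weighted sum (the weight w is an assumption).
    def norm(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min() + 1e-10)
    return w * norm(scores_3d) + (1 - w) * norm(scores_2d)

Under these assumptions, a rank-1 decision for each probe would simply pick the gallery entry with the smallest fused score.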

Item Type: Conference or Workshop Item (Paper)
Refereed: Yes
Divisions: Science > School of Mathematical, Physical and Computational Sciences > Department of Computer Science
ID Code: 47385
