ReadingAct RGB-D action dataset and human action recognition from local features

Chen, L., Wei, H. and Ferryman, J. (2014) ReadingAct RGB-D action dataset and human action recognition from local features. Pattern Recognition Letters, 50. pp. 159-169. ISSN 0167-8655

Full text not archived in this repository.

It is advisable to refer to the publisher's version if you intend to cite from this work.

DOI: 10.1016/j.patrec.2013.09.004

Abstract/Summary

For general home monitoring, a system should automatically interpret people's actions. It should be non-intrusive and able to cope with a cluttered background and loose clothing. An approach based on spatio-temporal local features and a Bag-of-Words (BoW) model is proposed for single-person action recognition from combined intensity and depth images. To restore the temporal structure lost in the traditional BoW method, a dynamic time alignment technique with temporal binning is applied; this has not previously been used for human action recognition on depth imagery. A novel human action dataset with depth data was created using two Microsoft Kinect sensors. The ReadingAct dataset contains 20 subjects and 19 actions, for a total of 2340 videos. To investigate the effect of using depth images, the proposed method was tested on three depth datasets and compared to traditional Bag-of-Words methods. Results show that the proposed method improves recognition accuracy when depth is added to the conventional intensity data, and has advantages when dealing with long actions.
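The core idea in the abstract, aligning sequences of per-bin BoW histograms with dynamic time warping, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the equal-width temporal binning, and the chi-squared bin distance are assumptions chosen for the sketch.

```python
import numpy as np

def bow_histograms(word_labels, frame_indices, num_frames, vocab_size, num_bins):
    """Split a video into equal temporal bins and build one BoW histogram per bin.

    word_labels   : visual-word index of each local feature (e.g. from k-means
                    over spatio-temporal descriptors; hypothetical input here).
    frame_indices : frame at which each local feature was detected.
    """
    word_labels = np.asarray(word_labels)
    frame_indices = np.asarray(frame_indices)
    hists = np.zeros((num_bins, vocab_size))
    bins = np.minimum((frame_indices * num_bins) // num_frames, num_bins - 1)
    for b, w in zip(bins, word_labels):
        hists[b, w] += 1
    # L1-normalise each non-empty bin so feature-dense bins do not dominate.
    sums = hists.sum(axis=1, keepdims=True)
    return np.divide(hists, sums, out=np.zeros_like(hists), where=sums > 0)

def chi2(h1, h2, eps=1e-10):
    """Chi-squared distance between two normalised histograms (an assumed choice)."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping between two sequences of per-bin BoW histograms.

    Restores temporal order that a single global BoW histogram would discard.
    """
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = chi2(seq_a[i - 1], seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Classification could then be, for example, nearest-neighbour under dtw_distance against labelled training videos; intensity and depth channels could each contribute their own histogram sequence, with the two distances combined, which is one way the combined intensity-and-depth recognition described above might be realised.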

Item Type: Article
Refereed: Yes
Divisions: Science > School of Mathematical, Physical and Computational Sciences > Department of Computer Science
ID Code: 39833
Uncontrolled Keywords: Human action recognition, Depth sensor, Spatio-temporal local features, Dynamic time warping, ReadingAct action dataset
Publisher: Elsevier
