Deep trajectory representation-based clustering for motion pattern extraction in videos

Boyle, J., Nawaz, T. and Ferryman, J. (2017) Deep trajectory representation-based clustering for motion pattern extraction in videos. In: 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), 29 Aug.-1 Sept. 2017, Lecce, Italy.


It is advisable to refer to the publisher's version if you intend to cite from this work.


We present a deep trajectory feature representation approach to aid trajectory clustering and motion pattern extraction in videos. The proposed feature representation is neural network-based: the output of the smallest hidden layer of a trained autoencoder is used to encapsulate trajectory information. The trajectory features are then fed into a mean-shift clustering framework with an adaptive bandwidth parameter computation to yield dominant trajectory clusters. The corresponding motion patterns are extracted based on a distance minimization from the clusters' centroids. We show the effectiveness of the proposed approach on challenging public datasets involving traffic as well as non-traffic scenarios.
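As a rough illustration of the clustering and pattern-extraction stages only (not the authors' implementation; the autoencoder features, the exact bandwidth computation, and the dataset are stand-ins), the sketch below runs a flat-kernel mean-shift with a k-nearest-neighbour-based bandwidth estimate on synthetic feature vectors, then selects, per cluster, the feature closest to the centroid as the representative motion pattern:

```python
import numpy as np

def adaptive_bandwidth(X, k=5, scale=2.5):
    """Hypothetical adaptive bandwidth: scaled mean distance to the k-th
    nearest neighbour (the paper's exact computation is not reproduced here)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    d.sort(axis=1)
    return scale * d[:, k].mean()

def mean_shift(X, bandwidth, n_iter=50):
    """Flat-kernel mean-shift: each mode repeatedly moves to the mean of
    the data points inside its bandwidth window."""
    modes = X.copy()
    for _ in range(n_iter):
        for i in range(len(modes)):
            window = X[np.linalg.norm(X - modes[i], axis=1) < bandwidth]
            modes[i] = window.mean(axis=0)
    # merge modes that converged to (nearly) the same point
    centres = []
    for m in modes:
        if all(np.linalg.norm(m - c) >= bandwidth for c in centres):
            centres.append(m)
    centres = np.array(centres)
    labels = np.argmin(
        np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=-1), axis=1)
    return centres, labels

rng = np.random.default_rng(0)
# stand-in "deep trajectory features": two well-separated groups
X = np.vstack([rng.normal(0.0, 0.3, (30, 2)),
               rng.normal(5.0, 0.3, (30, 2))])

bw = adaptive_bandwidth(X)
centres, labels = mean_shift(X, bw)

# motion pattern per cluster: the feature closest to the cluster centroid
patterns = [X[labels == j][np.argmin(np.linalg.norm(X[labels == j] - c, axis=1))]
            for j, c in enumerate(centres)]
print(len(centres), len(patterns))
```

In the paper the inputs to this stage are the autoencoder bottleneck activations rather than raw 2-D points; the centroid-distance minimization at the end mirrors the abstract's description of how motion patterns are read off the clusters.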

Item Type: Conference or Workshop Item (Paper)
Divisions: Science > School of Mathematical, Physical and Computational Sciences > Department of Computer Science
ID Code: 75242

