Gaussian processes autoencoder for dimensionality reduction

Jiang, X., Gao, J., Hong, X. ORCID: https://orcid.org/0000-0002-6832-2298 and Cai, Z. (2014) Gaussian processes autoencoder for dimensionality reduction. In: Proceedings of the 18th Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD 2014), Part II, 13-16 May 2014, Tainan, Taiwan.

Text - Accepted Version (374kB)
It is advisable to refer to the publisher's version if you intend to cite from this work.

Official URL: http://dx.doi.org/10.1007/978-3-319-06605-9_6

Abstract/Summary

Learning a low-dimensional manifold from highly nonlinear, high-dimensional data has become increasingly important for discovering intrinsic representations that can be used for data visualization and preprocessing. The autoencoder is a powerful dimensionality reduction technique based on minimizing reconstruction error, and it has regained popularity because it can be used efficiently for greedy pretraining of deep neural networks. Compared to Neural Networks (NNs), Gaussian Processes (GPs) have shown advantages in model inference, optimization and performance, and GPs have been successfully applied in nonlinear Dimensionality Reduction (DR) algorithms such as the Gaussian Process Latent Variable Model (GPLVM). In this paper we propose the Gaussian Processes Autoencoder Model (GPAM) for dimensionality reduction, which extends the classic NN-based autoencoder to a GP-based autoencoder. Interestingly, the new model can also be viewed as a back-constrained GPLVM (BC-GPLVM) in which the smooth back-constraint function is represented by a GP. Experiments verify the performance of the newly proposed model.
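
The abstract describes an autoencoder in which both the encoder and the decoder are GP mappings, trained by minimizing reconstruction error. The following is a minimal, hypothetical sketch of that idea, not the authors' implementation: it assumes RBF kernels with fixed hyperparameters, treats the latent coordinates Z as the free parameters that the encoder GP smooths, and uses SciPy's L-BFGS-B optimizer; all function names and the toy data set are illustrative only.

    # Hypothetical sketch of a GP-based autoencoder in the spirit of GPAM.
    # Assumptions (not from the paper): RBF kernel, fixed hyperparameters,
    # free latent targets Z, and finite-difference L-BFGS-B optimization.
    import numpy as np
    from scipy.optimize import minimize

    def rbf(A, B, lengthscale=1.0, variance=1.0):
        """Squared-exponential kernel matrix between row sets A and B."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

    def gp_posterior_mean(X_train, T_train, X_test, noise=1e-2):
        """Posterior mean of a GP regressor with an RBF kernel."""
        K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
        Ks = rbf(X_test, X_train)
        return Ks @ np.linalg.solve(K, T_train)

    def reconstruction_loss(z_flat, Y, latent_dim):
        Z = z_flat.reshape(len(Y), latent_dim)
        # Encoder GP: smooth mapping from observed data Y to latent targets Z.
        Z_enc = gp_posterior_mean(Y, Z, Y)
        # Decoder GP: map the encoded latents back to data space.
        Y_rec = gp_posterior_mean(Z_enc, Y, Z_enc)
        # Squared reconstruction error plus a mild Gaussian prior on Z.
        return np.sum((Y - Y_rec) ** 2) + 1e-3 * np.sum(Z ** 2)

    # Toy data: a noisy 1-D curve embedded in 3-D, standardized per dimension.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 2 * np.pi, 60)
    Y = np.column_stack([np.sin(t), np.cos(t), t]) + 0.05 * rng.standard_normal((60, 3))
    Y = (Y - Y.mean(0)) / Y.std(0)

    latent_dim = 1
    Z0 = 0.1 * rng.standard_normal((len(Y), latent_dim))
    res = minimize(reconstruction_loss, Z0.ravel(), args=(Y, latent_dim),
                   method="L-BFGS-B")
    Z_learned = res.x.reshape(len(Y), latent_dim)
    print("final reconstruction loss:", res.fun)

Because the encoder here is itself a GP regressor from the data to the latent coordinates, the sketch also illustrates the back-constrained GPLVM view mentioned in the abstract, in which the back-constraint function is a GP rather than a parametric network.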

Item Type: Conference or Workshop Item (Paper)
Refereed: Yes
Divisions: Science > School of Mathematical, Physical and Computational Sciences > Department of Computer Science
ID Code: 39730
